Dump on large file systems

Hornet hornetmadness at gmail.com
Mon Aug 15 11:32:03 GMT 2005


On 8/14/05, John Pettitt <jpp at cloudview.com> wrote:
> I tried to dump a 600gb file system a few days ago and it didn't
> work.  dump went compute bound during phase III and never wrote any
> data to the dump device (this on an up to date  RELENG_5 box).  - is
> this a known problem? Are there any work arounds?
> 
> John

If you are dumping that 600 GB slice to a file, you will need to split
the output into smaller chunks.

dump -0auLf - / | split -a4 -b1024m - "path/to/dump/file."
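
In that pipeline, -0 asks dump for a full (level 0) dump, -a bypasses
the tape-length calculations, -u records the dump in /etc/dumpdates,
-L takes a snapshot so a live filesystem can be dumped consistently,
and -f - sends the output to standard output so split can read it.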

The above line will create 1 GB files, with a four-character suffix
appended to whatever prefix you give split (note the trailing ".").
For example, with a prefix of "20050815-root." you would get:
20050815-root.aaaa
20050815-root.aaab
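
To restore, you can just cat the chunks back together in order (the
suffixes sort correctly, so a shell glob works) and feed the stream to
restore. Something like this should do it, assuming the pieces are in
path/to/dump/ and the new filesystem is mounted where you want the
files extracted:

cd /mnt/newfs && cat path/to/dump/file.* | restore -rf -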

You can also pipe the dump through gzip, but compression makes the
backup take considerably longer.
dump -0auLf - / | gzip | split -a4 -b1024m - "path/to/dump/file.gz."
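
Restoring a compressed dump is the same idea, just with gunzip in the
middle. Again assuming the path used above:

cd /mnt/newfs && cat path/to/dump/file.gz.* | gunzip | restore -rf -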

-Erik-

