Dump on large file systems
hornetmadness at gmail.com
Mon Aug 15 11:32:03 GMT 2005
On 8/14/05, John Pettitt <jpp at cloudview.com> wrote:
> I tried to dump a 600 GB file system a few days ago and it didn't
> work. dump went compute-bound during phase III and never wrote any
> data to the dump device (this on an up-to-date RELENG_5 box). Is
> this a known problem? Are there any workarounds?
> freebsd-questions at freebsd.org mailing list
> To unsubscribe, send any mail to "freebsd-questions-unsubscribe at freebsd.org"
If you are dumping that 600 GB file system to a file, you will need to
split it up into smaller chunks:
dump -0auLf - / | split -a4 -b1024m - "path/to/dump/file."
The above line writes the dump in 1 GB pieces, appending a
four-character suffix to each file name (see split(1)).
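Because split's suffixes (.aaaa, .aaab, ...) sort lexicographically, a
shell glob reassembles the pieces in order for restore. A small
round-trip sketch with hypothetical paths and a random stream standing
in for dump output:

```shell
# Sketch: show that split's 4-character suffixes sort so a glob
# reassembles the pieces in order. Paths here are hypothetical.
rm -rf /tmp/splitdemo && mkdir /tmp/splitdemo
head -c 3000000 /dev/urandom > /tmp/splitdemo/stream   # stand-in for dump output
split -a4 -b1m /tmp/splitdemo/stream /tmp/splitdemo/stream.part.
cat /tmp/splitdemo/stream.part.* > /tmp/splitdemo/rejoined
cmp /tmp/splitdemo/stream /tmp/splitdemo/rejoined && echo "pieces match"
# With a real dump, pipe the rejoined stream into restore(8) instead,
# from the root of the target file system:
#   cat path/to/dump/file.* | restore -rf -
```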
You can also gzip it, but this makes the backup take a long time.
dump -0auLf - / | gzip | split -a4 -b1024m - "path/to/dump/file.gz."
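Restoring the gzipped variant just adds a gunzip stage before
restore(8); a round-trip sketch, again with hypothetical paths:

```shell
# Sketch: gzipped pieces concatenate the same way; gunzip the joined
# stream before handing it to restore(8). Paths are hypothetical.
rm -rf /tmp/gzdemo && mkdir /tmp/gzdemo
head -c 2000000 /dev/urandom > /tmp/gzdemo/stream      # stand-in for dump output
gzip -c /tmp/gzdemo/stream | split -a4 -b1m - /tmp/gzdemo/stream.gz.part.
cat /tmp/gzdemo/stream.gz.part.* | gunzip > /tmp/gzdemo/rejoined
cmp /tmp/gzdemo/stream /tmp/gzdemo/rejoined && echo "gzip round-trip ok"
# Real restore would be:
#   cat path/to/dump/file.gz.* | gunzip | restore -rf -
```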