dump reads more than restore writes?

David Gilbert dgilbert at dclg.ca
Mon Jan 8 04:19:26 UTC 2007


>>>>> "Dan" == Dan Nelson <dnelson at allantgroup.com> writes:

Dan> If you have a lot of small files, dump may be rereading directory
Dan> information.  Dump has a cache option that can help, but make
Dan> sure you also dump a snapshot (i.e. always use -L when using -C).

Several people have suggested this, but I see the same behaviour
when using -C (I often use -C 32).  This filesystem is not mounted,
so -L is not required here, though I do use -L when it is.
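For reference, a typical invocation along the lines people are
suggesting might look like this (the device node, output path, and
cache size are just examples, not what I actually run):

```shell
# Level 0 dump with a 32 MB directory/inode cache (-C 32).
# -L takes a filesystem snapshot first; it is only needed when
# dumping a mounted (live) filesystem.
dump -0uaL -C 32 -f /backup/media.dump /dev/da0s1d
```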

But the filesystem is 95% full (aren't they all), and the vast
majority of the files on it are "media" files --- i.e. movies, MP3s,
ISOs, etc.  Very few files are under a meg, and probably not many are
over 100 meg, since most of the files on the disk are in rar format.

... now Azureus (used to obtain most of the files) writes holey
(sparse) files, one chunk at a time (512k-ish to 4 meg-ish).  Those
could end up being not very contiguous, but I'd expect them to
consist (by majority) of full filesystem blocks.
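A quick way to see how holey such a file really is would be to
compare its apparent size against its allocated space --- du reports
blocks actually allocated, while ls -l reports the apparent length
(the path below is made up for illustration):

```shell
# Create a 4 MB file containing one real byte at the very end;
# everything before it is a hole.  /tmp/holey is just an example.
dd if=/dev/zero of=/tmp/holey bs=1 count=1 seek=4194303 2>/dev/null

ls -l /tmp/holey        # apparent size: 4194304 bytes
du -k /tmp/holey        # allocated space: only a few KB
```

If du's number is far below the apparent size, the file is sparse
and restore can write it with far less I/O than its nominal length.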

The amazing part (to me) is how consistent it is.  If this is not a
reporting error in gstat, it makes dump look _very_ wasteful: taken
at face value, the numbers mean dump is reading 600 gig to copy a
200 gig disk.  !?!
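For anyone wanting to reproduce the measurement, something like the
following watches just the source provider while dump runs, so the
read column can be compared directly against what restore (or the
dump file) ends up containing ('da0' is a placeholder for the actual
disk name):

```shell
# Refresh every 5 seconds, showing only the provider named da0;
# the kBps read column accumulates everything dump pulls off the disk.
gstat -f '^da0$' -I 5s
```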

Dave.

-- 
============================================================================
|David Gilbert, Independent Contractor.       | Two things can be          |
|Mail:       dave at daveg.ca                    |  equal if and only if they |
|http://daveg.ca                              |   are precisely opposite.  |
=========================================================GLO================


More information about the freebsd-hackers mailing list