Backups / Dump etc
john at starfire.mn.org
Fri Jan 14 14:10:19 PST 2005
On Fri, Jan 14, 2005 at 10:49:44PM +0100, Anthony Atkielski wrote:
> John writes:
> J> If you are running FreeBSD 5.x, you get the cool "L" option on
> J> dump which will automatically snapshot the mounted filesystems.
> What exactly is meant by a "snapshot," and how much extra disk space
> does it require when dump runs? I've seen the warnings when I run dump
> on a running multiuser system in 5.x, and I turned on the L option,
> since I'm not hurting on disk space, but I do wonder how much space it
> requires (I can't imagine a snapshot that doesn't require a lot of disk
> space).
A snapshot is an abstraction of the state of the filesystem at
the moment that the snapshot session begins. In order to do this,
the system must preserve pre-session data as new data are written
to the filesystem.
Performance and storage are both considerations. The snapshot
collects the pre-session data as new data are written. How much
space it uses depends on how much write activity is going on
in your filesystem - there's no way we can predict that for you.
When you start the snapshot, the only thing in the snap is the
overhead structural data. As writes happen to the filesystem, the
"old" data are written to the snap save area. When something reads
from the snap, it goes to the "real" filesystem if that hasn't
changed, or it pulls from the snap if that corresponding data have
been updated. The longer the snap is active, and the more write
activity you have, the more "preimage" data will need to be
written to the snap area.
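The write path and read path described above can be sketched in a few
lines of Python. This is only an illustration of the copy-on-write
idea, not the UFS implementation - the class, method names, and
block-map representation are all invented for the example:

```python
# Sketch of copy-on-write snapshot semantics (illustrative names only,
# not the actual UFS code).

class Filesystem:
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # block number -> data (the live filesystem)
        self.snap = None             # preimage save area while a snap is active

    def begin_snapshot(self):
        # At snap start, the save area holds only structural overhead
        # (modeled here as an empty map).
        self.snap = {}

    def write(self, blkno, data):
        # Before the first overwrite of a block, preserve its pre-session
        # image in the snap save area; later overwrites need no copy.
        if self.snap is not None and blkno not in self.snap:
            self.snap[blkno] = self.blocks[blkno]
        self.blocks[blkno] = data

    def read_snapshot(self, blkno):
        # A snapshot read pulls from the save area if that block has
        # changed since the snap began, otherwise from the live filesystem.
        if blkno in self.snap:
            return self.snap[blkno]
        return self.blocks[blkno]

fs = Filesystem({0: "inode table", 1: "old mail"})
fs.begin_snapshot()
fs.write(1, "new mail")
assert fs.read_snapshot(1) == "old mail"     # snap still sees pre-session data
assert fs.read_snapshot(0) == "inode table"  # unchanged block comes from the live fs
```

Note that only the first overwrite of a given block costs a preimage
copy; that is why the snap grows with the *set* of blocks written, not
with the total number of writes.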
Performance is affected in this way - when a new write comes in,
unless the preimage data are already in the cache buffers, it has
to be read from the standard filesystem and posted to the snap area
before the write can complete. That turns a single write operation
into a read plus two writes. That consumes CPU and SCSI bus
resources, as well as making your spindles more active.
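The "read plus two writes" amplification can be counted out explicitly.
This little function is purely illustrative (its name and parameters are
invented for the example), but it tallies the same cases described
above:

```python
# Count the I/O operations a single application write costs while a
# snapshot is active (illustrative model, not measured behavior).

def io_ops_for_write(snapshot_active, preimage_cached, already_preserved):
    reads = writes = 0
    if snapshot_active and not already_preserved:
        if not preimage_cached:
            reads += 1   # fetch the preimage from the standard filesystem
        writes += 1      # post the preimage to the snap save area
    writes += 1          # the application's own write
    return reads, writes

assert io_ops_for_write(False, False, False) == (0, 1)  # no snap: one write
assert io_ops_for_write(True, False, False) == (1, 2)   # a read plus two writes
assert io_ops_for_write(True, True, False) == (0, 2)    # preimage already cached
```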
Worst case, if your filesystem were 100% overwritten, the snap
would eventually contain all of the original data from the filesystem,
plus the overhead structural information. That would be pathological.
In real environments, the snap area tends to start out with a burst,
and then asymptotically approach a value which is characteristic
of a specific workload. We tend to re-write recent data more
frequently (think of updates to active inodes, or debugging and
recompiling) and older data tend to get written less frequently
(by definition - if it were written a lot, it wouldn't be "older").
A change in workload can cause a new burst of snap activity, but
still, the BROAD BRUSH, OVERLY SIMPLIFIED VALUE that you can expect
is 20% for a full-day snapshot on a moderately active system. If
you are only snapping for the duration of the backup, the overhead
could be MUCH lower. Your mileage WILL vary. No guarantees are
implied or expressed.
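To put the broad-brush figure in concrete terms (the filesystem size
here is a made-up example, and the 20% is only the rough rule of thumb
given above):

```python
# Back-of-the-envelope only: applying the rough 20%-per-day figure
# to a hypothetical 100 GB moderately active filesystem.
fs_size_gb = 100
full_day_snap_gb = 0.20 * fs_size_gb
print(full_day_snap_gb)   # about 20 GB of snap save area for a full-day snap
```

A snap held only for the duration of a nightly dump would typically
cost a small fraction of that.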
Of course, if you start multiple snap sessions, the overhead can
eventually be 100%+ for each snap session. That's ridiculous
in practice, but it is the theoretical upper limit. The practical
upper limit will entirely depend on your environment.
See? I am still a real engineer! My answer, ultimately, is
"it depends."
john at starfire.MN.ORG
More information about the freebsd-questions mailing list