Musings on ZFS Backup strategies

David Magda dmagda at ee.ryerson.ca
Sat Mar 2 00:48:40 UTC 2013


On Mar 1, 2013, at 12:23, Daniel Eischen wrote:

> dump (and ufsdump for our Solaris boxes) _just work_, and we
> can go back many many years and they will still work.  If we
> convert to ZFS, I'm guessing we'll have to do nightly
> incrementals with 'tar' instead of 'dump' as well as doing
> ZFS snapshots for fulls.

Keep some snapshots around, and send data to tape after a certain amount of time. Most (though not all) restore requests are for data from within the last "x" weeks, where "x" is a different value for each organization. (The hit rate is asymptotic, though: each additional week of snapshots helps less than the one before.)

So if you keep 1 week's worth of snapshots, you'll probably end up being able to service (say) 25% of restore requests: the file can usually be grabbed from yesterday's snapshot. Keep 2 weeks' worth and you'll probably catch 50% of requests; 4 weeks will give you 80%; 6 weeks, 90%; 8 weeks, 95%.

Of course, the more snapshots you keep, the more spinning disk you need (using power and generating heat).
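As a sketch, a rolling nightly-snapshot scheme like the above can be driven by a small cron job. Everything here (the tank/home dataset, the nightly- naming convention, and the 14-day retention) is a made-up example, not anything from the thread:

```shell
#!/bin/sh
# Rolling nightly snapshots with a fixed retention window.
# DATASET and KEEP_DAYS are assumptions -- adjust to taste.

DATASET=${DATASET:-tank/home}
KEEP_DAYS=${KEEP_DAYS:-14}

rotate_snapshots() {
    today=$(date +%Y-%m-%d)
    # Cutoff date: BSD date syntax first, GNU date as a fallback.
    cutoff=$(date -v "-${KEEP_DAYS}d" +%Y-%m-%d 2>/dev/null ||
             date -d "${KEEP_DAYS} days ago" +%Y-%m-%d)
    cutoff_n=$(echo "$cutoff" | tr -d -)

    # Take tonight's snapshot.
    zfs snapshot "${DATASET}@nightly-${today}"

    # Destroy nightly snapshots older than the cutoff.  Dropping the
    # dashes turns the embedded ISO date into a comparable integer.
    zfs list -H -t snapshot -o name |
        grep "^${DATASET}@nightly-" |
        while read -r snap; do
            stamp_n=$(echo "${snap##*@nightly-}" | tr -d -)
            if [ "$stamp_n" -lt "$cutoff_n" ]; then
                zfs destroy "$snap"
            fi
        done
    return 0
}

# Only run automatically if the zfs(8) utility is actually present.
if command -v zfs >/dev/null 2>&1; then
    rotate_snapshots
fi
```

A crontab entry along the lines of `0 2 * * * root /usr/local/sbin/rotate-snapshots.sh` would then do the nightly rotation.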

Most articles on backup/restore best practices that I've read in the last few years say you want to back up to disk first (snapshots, VTLs, etc.), and then clone to tape after a certain amount of time ("x" weeks). Or rather: back up to disk AND tape simultaneously, then clone to a second tape (so you have two copies) and purge the disk copy after "x".
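For the clone-to-tape step, ZFS send streams serialize a snapshot (or the delta between two snapshots) into a plain byte stream, which can go to a file or a tape device. A minimal sketch, with hypothetical dataset/snapshot names and destination:

```shell
#!/bin/sh
# Serialize ZFS snapshots for cloning to tape (or a file).  All names
# here (tank/home, the snapshot labels, /backup) are hypothetical.

send_streams() {
    dataset=$1   # e.g. tank/home
    full=$2     # weekly snapshot name, e.g. weekly-2013-02-24
    incr=$3     # latest nightly, e.g. nightly-2013-03-01
    dest=$4     # directory to write the streams into

    # Full stream of the weekly snapshot.
    zfs send "${dataset}@${full}" > "${dest}/full.zfs"

    # Incremental stream: only the blocks changed between the two.
    zfs send -i "${dataset}@${full}" "${dataset}@${incr}" > "${dest}/incr.zfs"
}

# Restore later by replaying the streams in order, e.g.:
#   zfs receive tank/restored < full.zfs
#   zfs receive tank/restored < incr.zfs
if command -v zfs >/dev/null 2>&1; then
    send_streams tank/home weekly-2013-02-24 nightly-2013-03-01 /backup
fi
```

To write directly to tape on FreeBSD you could pipe through dd instead of redirecting to a file, e.g. `zfs send ... | dd of=/dev/sa0 bs=128k`.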

So in this instance, keep snapshots around for a little while, and keep doing your tape backups for long-term storage. Also inform people about the .zfs/snapshot/ directory so they can possibly do some "self service" in case they fat-finger something (quicker for them, and less hassle for the help desk/IT).
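For that self-service case: each ZFS filesystem exposes its snapshots read-only under a hidden .zfs/snapshot/ tree at its mountpoint (it can be made visible in directory listings with `zfs set snapdir=visible`). A toy helper, with a made-up name and hypothetical paths:

```shell
#!/bin/sh
# "Self service" restore from the per-filesystem .zfs/snapshot/ tree.
# The helper name and all example paths are made up for illustration.

restore_file() {
    fs_root=$1    # mountpoint of the ZFS filesystem, e.g. /home
    snap=$2       # snapshot name, e.g. nightly-2013-03-01
    relpath=$3    # file relative to the mountpoint, e.g. alice/thesis.tex

    # Snapshots appear under .zfs/snapshot/<name>/, mirroring the live
    # directory tree; copy the old version back next to the current one.
    cp "${fs_root}/.zfs/snapshot/${snap}/${relpath}" \
       "${fs_root}/${relpath}.restored"
}

# e.g.: restore_file /home nightly-2013-03-01 alice/thesis.tex
```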



More information about the freebsd-stable mailing list