Musings on ZFS Backup strategies

Daniel Eischen deischen at freebsd.org
Fri Mar 1 17:23:44 UTC 2013


On Fri, 1 Mar 2013, Ben Morrow wrote:

> Quoth Karl Denninger <karl at denninger.net>:
>> Dabbling with ZFS now, and giving some thought to how to handle backup
>> strategies.
> [...]
>>
>> Take a base snapshot immediately and zfs send it to offline storage.
>> Take an incremental at some interval (appropriate for disaster recovery)
>> and zfs send THAT to stable storage.
>>
>> If I then restore the base and snapshot, I get back to where I was when
>> the latest snapshot was taken.  I don't need to keep the incremental
>> snapshot for longer than it takes to zfs send it, so I can do:
>>
>> zfs snapshot pool/some-filesystem@unique-label
>> zfs send -i pool/some-filesystem@base pool/some-filesystem@unique-label
>> zfs destroy pool/some-filesystem@unique-label
>>
>> and that seems to work (and restore) just fine.
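The restore side of that is just a pair of receives.  A minimal
sketch, assuming the two streams were saved to files -- the pool
and file names here are made up:

zfs receive restorepool/some-filesystem < base.zfs
zfs receive restorepool/some-filesystem < unique-label.zfs

The incremental only applies cleanly if the target filesystem is
untouched after the base is received; 'zfs receive -F' will roll
it back first if necessary.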
>
> For backup purposes it's worth using the -R and -I options to zfs send
> rather than -i. This will preserve the other snapshots, which can be
> important.
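Something like this, using Karl's labels, would package the whole
chain in one replication stream:

zfs send -R -I pool/some-filesystem@base pool/some-filesystem@unique-label

-I sends all the intermediate snapshots between the two named ones,
and -R adds dataset properties and any descendent filesystems.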
>
>> Am I looking at this the right way here?  Provided that the base backup
>> and incremental are both readable, it appears that I have the disaster
>> case covered, and the online snapshot increments and retention are
>> easily adjusted and cover the "oops" situations without having to resort
>> to the backups at all.
>>
>> This in turn means that keeping more than two incremental dumps offline
>> has little or no value; the second merely being taken to ensure that
>> there is always at least one that has been written to completion without
>> error to apply on top of the base.  That in turn makes the backup
>> storage requirement based only on entropy in the filesystem and not on
>> time (where the "Tower of Hanoi" style dump hierarchy imposed both a
>> time AND an entropy cost on backup media.)
>
> No, that's not true. Since you keep taking successive increments from a
> fixed base, the size of those increments will increase over time (each
> increment will include all net filesystem activity since the base
> snapshot). In UFS terms, it's equivalent to always taking level 1 dumps.
> Unlike with UFS, the @base snapshot will also start using increasing
> amounts of space in the source zpool.
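The usual way around that is to chain the increments -- send each
one relative to the previous snapshot rather than the fixed base,
and only retire a snapshot once its successor has been sent.  A
sketch, with made-up labels:

zfs snapshot pool/some-filesystem@day2
zfs send -i pool/some-filesystem@day1 pool/some-filesystem@day2 > day2.zfs
zfs destroy pool/some-filesystem@day1

Each stream then stays proportional to one day's churn, at the cost
that a restore needs every increment in the chain, applied in order.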
>
> I don't know what medium you're backing up to (does anyone use tape any
> more?) but when backing up to disk I much prefer to keep the backup in
> the form of a filesystem rather than as 'zfs send' streams. One reason
> for this is that I believe that new versions of the ZFS code are more
> likely to be able to correctly read old versions of the filesystem than
> old versions of the stream format; this may not be correct any more,
> though.
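Keeping the backup as a filesystem just means piping each send into
a receive on the backup pool instead of storing the raw stream.  A
minimal sketch, assuming a pool named backuppool:

zfs send -R -I pool/some-filesystem@base pool/some-filesystem@unique-label | \
    zfs receive -d backuppool

The -d option recreates the source's dataset layout under the target,
and the result can be browsed (and scrubbed) like any other pool.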

Yes, we still use a couple of DLT autoloaders and have nightly
incrementals and weekly fulls.  This is the problem I have with
converting to ZFS.  Our typical recovery is when a user says
they need a directory or set of files from a week or two ago.
Using dump from tape, I can easily extract *just* the necessary
files.  I don't need a second system to restore to just so that
I can then extract the files.
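With dump, that selective pull is a one-liner (tape device and
path are made up here):

restore -x -f /dev/nsa0 ./home/someuser/lost-dir

or 'restore -i' to browse the dump interactively before extracting.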

dump (and ufsdump for our Solaris boxes) _just work_, and we
can go back many, many years and they will still work.  If we
convert to ZFS, I'm guessing we'll have to do nightly
incrementals with 'tar' instead of 'dump', as well as doing
ZFS snapshots for fulls.
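If we do go that route, I imagine the nightly job would snapshot
first and tar from the snapshot directory, so the archive is
consistent -- roughly, with made-up names:

zfs snapshot pool/home@nightly
tar -cf /dev/nsa0 -C /pool/home/.zfs/snapshot/nightly .
zfs destroy pool/home@nightly

with something like tar's --newer-mtime option to keep the nightly
runs incremental rather than full.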

This topic is very interesting to me, as we're at the point
now (with Solaris 11 refusing to boot from anything but ZFS)
where we have to consider ZFS.

-- 
DE

