Musings on ZFS Backup strategies

Ben Morrow ben at
Fri Mar 1 16:50:59 UTC 2013

Quoth Karl Denninger <karl at>:
> Dabbling with ZFS now, and giving some thought to how to handle backup
> strategies.
> Take a base snapshot immediately and zfs send it to offline storage.
> Take an incremental at some interval (appropriate for disaster recovery)
> and zfs send THAT to stable storage.
> If I then restore the base and snapshot, I get back to where I was when
> the latest snapshot was taken.  I don't need to keep the incremental
> snapshot for longer than it takes to zfs send it, so I can do:
> zfs snapshot pool/some-filesystem@unique-label
> zfs send -i pool/some-filesystem@base pool/some-filesystem@unique-label
> zfs destroy pool/some-filesystem@unique-label
> and that seems to work (and restore) just fine.

For backup purposes it's worth using the -R and -I options to zfs send
rather than -i. This will preserve the other snapshots, which can be
useful if you later need to restore the filesystem to an intermediate
point in time.
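
As a sketch (the snapshot names and output path are illustrative), a
replicated incremental send that carries every intermediate snapshot
looks like this:

    zfs snapshot -r pool/some-filesystem@unique-label
    zfs send -R -I @base pool/some-filesystem@unique-label > /backup/incr.zfs
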

> Am I looking at this the right way here?  Provided that the base backup
> and incremental are both readable, it appears that I have the disaster
> case covered, and the online snapshot increments and retention are
> easily adjusted and cover the "oops" situations without having to resort
> to the backups at all.
> This in turn means that keeping more than two incremental dumps offline
> has little or no value; the second merely being taken to insure that
> there is always at least one that has been written to completion without
> error to apply on top of the base.  That in turn makes the backup
> storage requirement based only on entropy in the filesystem and not time
> (where the "tower of Hanoi" style dump hierarchy imposed both a time AND
> entropy cost on backup media.)

No, that's not true. Since you keep taking successive increments from a
fixed base, the size of those increments will increase over time (each
increment will include all net filesystem activity since the base
snapshot). In UFS terms, it's equivalent to always taking level 1 dumps.
Unlike with UFS, the @base snapshot will also start using increasing
amounts of space in the source zpool.
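You can watch this growth happen: 'zfs send' with -n (dry run) and -v
prints the estimated stream size without actually sending anything, so
running it against successive snapshots (names illustrative) shows each
increment from the fixed base getting bigger:

    zfs send -n -v -i pool/some-filesystem@base pool/some-filesystem@unique-label
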

I don't know what medium you're backing up to (does anyone use tape any
more?) but when backing up to disk I much prefer to keep the backup in
the form of a filesystem rather than as 'zfs send' streams. One reason
for this is that I believe that new versions of the ZFS code are more
likely to be able to correctly read old versions of the filesystem than
old versions of the stream format; this may not be correct any more,
but it seems the safer assumption.
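
One way to check what a given implementation can read is 'zfs upgrade
-v', which lists every on-disk filesystem version the running code
supports:

    zfs upgrade -v
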

Another reason is that it means I can do 'rolling snapshot' backups. I
do an initial dump like this

    # zpool is my working pool
    # bakpool is a second pool I am backing up to

    zfs snapshot -r zpool/fs@dump
    zfs send -R zpool/fs@dump | zfs recv -vFd bakpool

That pipe can obviously go through ssh or whatever to put the backup on
a different machine. Then to make an increment I roll forward the
snapshot like this

    zfs rename -r zpool/fs@dump zpool/fs@dump-old
    zfs snapshot -r zpool/fs@dump
    zfs send -R -I @dump-old zpool/fs@dump | zfs recv -vFd bakpool
    zfs destroy -r zpool/fs@dump-old
    zfs destroy -r bakpool/fs@dump-old

(Notice that the increment starts at a snapshot called @dump-old on the
send side but at a snapshot called @dump on the recv side. ZFS can
handle this perfectly well, since it identifies snapshots by GUID, and
will rename the bakpool snapshot as part of the recv.)

This brings the filesystem on bakpool up to date with the filesystem on
zpool, including all snapshots, but never creates an increment with more
than one backup interval's worth of data in it. If you want to keep more
history on the backup pool than the source pool, you can hold off on
destroying the old snapshots, and instead rename them to something
unique. (Of course, you could always give them unique names to start
with, but I find it more convenient not to.)
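
For example (the dated snapshot name is illustrative), the last two
'zfs destroy' commands in the increment above could become:

    zfs destroy -r zpool/fs@dump-old
    zfs rename -r bakpool/fs@dump-old bakpool/fs@dump-2013-03-01

so the source pool stays small while bakpool accumulates history.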


More information about the freebsd-stable mailing list