Musings on ZFS Backup strategies

Karl Denninger karl at
Sun Mar 3 01:56:47 UTC 2013

Quoth Ben Morrow:
> I don't know what medium you're backing up to (does anyone use tape any
> more?) but when backing up to disk I much prefer to keep the backup in
> the form of a filesystem rather than as 'zfs send' streams. One reason
> for this is that I believe that new versions of the ZFS code are more
> likely to be able to correctly read old versions of the filesystem than
> old versions of the stream format; this may not be correct any more,
> though.
> Another reason is that it means I can do 'rolling snapshot' backups. I
> do an initial dump like this
>     # zpool is my working pool
>     # bakpool is a second pool I am backing up to
>     zfs snapshot -r zpool/fs@dump
>     zfs send -R zpool/fs@dump | zfs recv -vFd bakpool
> That pipe can obviously go through ssh or whatever to put the backup on
> a different machine. Then to make an increment I roll forward the
> snapshot like this
>     zfs rename -r zpool/fs@dump @dump-old
>     zfs snapshot -r zpool/fs@dump
>     zfs send -R -I @dump-old zpool/fs@dump | zfs recv -vFd bakpool
>     zfs destroy -r zpool/fs@dump-old
>     zfs destroy -r bakpool/fs@dump-old
> (Notice that the increment starts at a snapshot called @dump-old on the
> send side but at a snapshot called @dump on the recv side. ZFS can
> handle this perfectly well, since it identifies snapshots by GUID, and
> will rename the bakpool snapshot as part of the recv.)
> This brings the filesystem on bakpool up to date with the filesystem on
> zpool, including all snapshots, but never creates an increment with more
> than one backup interval's worth of data in. If you want to keep more
> history on the backup pool than the source pool, you can hold off on
> destroying the old snapshots, and instead rename them to something
> unique. (Of course, you could always give them unique names to start
> with, but I find it more convenient not to.)
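For reference, the rolling-snapshot procedure quoted above can be sketched as a single script. The pool and filesystem names (zpool/fs, bakpool) are the example names from the quote; the `run` wrapper that merely prints each command is my own addition so the sequence can be previewed without touching a real pool:

```shell
#!/bin/sh
# Sketch of the rolling-snapshot backup from the quoted post.
# run() only echoes each command; swap it for `run() { "$@"; }`
# to execute the steps for real.

SRC="zpool/fs"
DST="bakpool"

run() { echo "+ $*"; }

roll_backup() {
    # Roll the snapshot forward on the send side.
    run zfs rename -r "${SRC}@dump" @dump-old
    run zfs snapshot -r "${SRC}@dump"
    # Ship exactly one backup interval's worth of changes.
    run sh -c "zfs send -R -I @dump-old ${SRC}@dump | zfs recv -vFd ${DST}"
    # Drop the old snapshots on both sides.
    run zfs destroy -r "${SRC}@dump-old"
    run zfs destroy -r "${DST}/fs@dump-old"
}

roll_backup
```

The pipe in the send step can go through ssh exactly as described in the quote; only the `run sh -c "..."` wrapping here is an artifact of the dry-run sketch.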

Uh, I see a potential problem here.

What if the zfs send | zfs recv command fails for some reason before
completion?  I have noted that zfs recv is atomic -- if it fails for any
reason the entire receive is rolled back like it never happened.

But you then destroy the old snapshot anyway, and the next time this
runs the new snapshot gets rolled down to @dump-old.  It would appear
that there's an increment missing, never to be seen again.

What gets lost in that circumstance?  Anything changed between those two
snapshot times -- and silently at that? (yikes!)
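One way to close that window -- my own sketch, not something from the quoted procedure -- is to defer both destroys until the receive reports success, so a failed increment leaves @dump-old in place on both pools and the same increment can simply be resent. The `zfs_cmd` wrapper and its DRYRUN switch are my additions for safe previewing:

```shell
#!/bin/sh
# Guarded increment: only destroy @dump-old after send|recv succeeds.
# Set DRYRUN=1 to print the commands instead of running them.

SRC="zpool/fs"
DST="bakpool"

zfs_cmd() {
    if [ -n "$DRYRUN" ]; then echo "+ zfs $*"; else zfs "$@"; fi
}

safe_increment() {
    zfs_cmd rename -r "${SRC}@dump" @dump-old || return 1
    zfs_cmd snapshot -r "${SRC}@dump" || return 1
    # Since zfs recv is atomic, a failure here rolls the receive back;
    # because we have not destroyed @dump-old yet, nothing is lost.
    # Note: in plain POSIX sh only recv's exit status is checked here;
    # use a shell with pipefail to also catch a failing zfs send.
    if [ -n "$DRYRUN" ]; then
        echo "+ zfs send -R -I @dump-old ${SRC}@dump | zfs recv -vFd ${DST}"
    else
        zfs send -R -I @dump-old "${SRC}@dump" | zfs recv -vFd "$DST" \
            || return 1
    fi
    # Only now is it safe to drop the old snapshots.
    zfs_cmd destroy -r "${SRC}@dump-old"
    zfs_cmd destroy -r "${DST}/fs@dump-old"
}
```

If the increment fails, the next run would need to skip the rename/snapshot steps and just retry the send; the point of the sketch is only that the destroys must come last.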

-- Karl Denninger
/The Market Ticker ®/
Cuda Systems LLC

More information about the freebsd-stable mailing list