Musings on ZFS Backup strategies
dmagda at ee.ryerson.ca
Mon Mar 4 17:05:26 UTC 2013
On Mon, March 4, 2013 11:07, Volodymyr Kostyrko wrote:
> 02.03.2013 03:12, David Magda:
>> There are quite a few scripts out there:
> A lot of them require python or ruby, and none of them manages
> synchronizing snapshots over network.
Yes, but I think it is worth considering the creation of snapshots, and
the transfer of snapshots, as two separate steps. Treating them
independently (perhaps in two different scripts) helps prevent a
breakage in one from affecting the other.
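A standalone snapshot step might look like the sketch below. The dataset name (tank/home) and the "auto-" naming scheme are made up for illustration; adjust to your own pools:

```shell
#!/bin/sh
# Sketch of a local-only snapshot script. Nothing here touches the
# network, so it keeps working even when the remote side is down.
# tank/home and the "auto-" prefix are hypothetical names.
DATASET="tank/home"
SNAP="$DATASET@auto-$(date -u +%Y%m%d-%H%M%S)"

# Create the snapshot and report what was made.
zfs snapshot "$SNAP" && echo "created $SNAP"
```

The transfer script can then pick up whatever snapshots exist, whenever the network allows.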
Snapshots are not backups (IMHO), but they are handy for users and
sysadmins in the simple case of accidentally deleted files. If your
network access / copying breaks or is slow for some reason, at least you
still have copies locally. Similarly if you're having issues with the
machine that keeps your remote pool.
By keeping the snapshots going separately, once any problems with the
network or remote server are solved, you can use them to incrementally
sync up the remote pool. You can simply run the remote-sync scripts more
often to catch up.
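The catch-up itself can be a single incremental send/receive. A minimal sketch, assuming the remote side already holds an older snapshot in common with the local pool (the dataset, snapshot names, and "backuphost" are placeholders, not from the original post):

```shell
#!/bin/sh
# Incremental catch-up sketch. All names below are hypothetical.
DATASET="tank/home"
LAST_COMMON="auto-20130301-000000"  # newest snapshot the remote already has
NEWEST="auto-20130304-000000"      # newest local snapshot

# Send only the delta between the two snapshots; -F on the receive
# rolls the remote dataset back to the common snapshot if needed.
zfs send -i "$DATASET@$LAST_COMMON" "$DATASET@$NEWEST" | \
    ssh backuphost "zfs receive -F backup/home"
```

Because the snapshots were created independently, a failed transfer costs nothing: the next run just sends a bigger increment.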
It's just an idea, and everyone has different needs. I often find it handy
to keep different steps in different scripts that are loosely coupled.
>> This allows one to get a quick list of files and directories, then use
>> tar/rsync/cp/etc. to do the actual copy (where the destination does not
>> have to be ZFS: e.g., NFS, ext4, Lustre, HDFS, etc.).
> I know that but I see no reason in reverting to file-based synch if I
> can do block-based.
Sure. I just thought I'd mention it in the thread in case others do need
that functionality and were not aware of "zfs diff". Not everyone does or
can do pool-to-pool backups.
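For the file-based case, the output of "zfs diff" can drive the copy tool directly. A sketch (the snapshot names and destination are hypothetical):

```shell
#!/bin/sh
# Feed the changed and added paths from "zfs diff" into rsync.
# -H makes the output tab-separated and script-friendly: the first
# field is the change type (- + M R), the second is the path.
zfs diff -H tank/home@monday tank/home@tuesday | \
    awk -F'\t' '$1 == "M" || $1 == "+" { print $2 }' | \
    rsync -a --files-from=- / backuphost:/backup/home/
```

The destination here could just as well be NFS, ext4, or anything else rsync can write to, which is the point: no ZFS needed on the far side.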
More information about the freebsd-stable mailing list