Anyone used rsync scriptology for incremental backup?

Nikolay Denev ndenev at gmail.com
Thu Oct 30 08:18:07 PDT 2008



On 30 Oct, 2008, at 17:04 , Freddie Cash wrote:

> On October 30, 2008 01:25 am Nikolay Denev wrote:
>> On 30 Oct, 2008, at 07:00 , Freddie Cash wrote:
>>> On Thu, Oct 30, 2008 at 1:50 AM, Andrew Snow <andrew at modulus.org> wrote:
>>>> In this way, each day we generate a batch file that lets us step
>>>> back one day.  The diffs themselves, compressed with gzip, are
>>>> extremely space efficient.  We can step back potentially hundreds
>>>> of days, though it seems to throw errors sometimes when backing up
>>>> Windows boxes, which I haven't tracked down yet.
>>>>
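(For reference, that scheme maps onto rsync's batch mode:
--write-batch records the delta applied to the destination, and
--read-batch replays it later.  A minimal sketch, with made-up paths
and a pull-style mirror, not the exact script from this thread:

    TODAY=$(date +%Y-%m-%d)
    # refresh the mirror and record tonight's delta as a batch file
    rsync -a --write-batch=/backups/batches/$TODAY \
        server:/data/ /backups/current/
    gzip -9 /backups/batches/$TODAY

    # to reconstruct an old state, copy a known baseline and replay
    # the saved batches against it in date order; --read-batch=-
    # reads the batch data from stdin
    # gzip -dc /backups/batches/2008-10-01.gz | \
    #     rsync -a --read-batch=- /restore/data/

Stepping "back" this way means replaying forward from a baseline copy
up to the day you want, so the baseline has to be kept around.)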
>>>> But to be honest, soon you can save yourself a lot of hassle by
>>>> simply using ZFS and taking snapshots.  It'll be faster, and with
>>>> compression very space efficient.
>>>
>>> That's exactly what we do: use ZFS and rsync.  We have a ZFS
>>> /storage/backup filesystem, with directories for each remote site
>>> and sub-directories for each server to be backed up.
>>>
>>> Each night we snapshot the directory, then run rsync to back up each
>>> server.  Snapshots are named with the current date.  For 80 FreeBSD
>>> and Linux servers, we average 10 GB of changed data a night.
>>>
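(In outline, that nightly cycle is just two commands per server; a
sketch with invented host and dataset names:

    TODAY=$(date +%Y-%m-%d)
    # one-time setup: a compressed filesystem to hold the trees
    # zfs create -o compression=on storage/backup

    # snapshot first, so today's snapshot preserves last night's state
    zfs snapshot storage/backup@$TODAY
    # then refresh each server's tree in place
    rsync -a --delete --exclude=/proc --exclude=/sys \
        root@server1.site1:/ /storage/backup/site1/server1/

Because rsync only transfers changed blocks and the snapshot only
holds changed blocks, the per-night cost stays close to the actual
churn.)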
>>> No muss, no fuss.  We've used it to restore entire servers (boot off
>>> Knoppix/Frenzy CD, format partitions, rsync back), individual files
>>> (no mounting required, just cd into the .zfs/snapshot/snapshotname
>>> directory and scp the file), and even once to restore the permissions
>>> on a pair of servers where a clueless admin ran "chown -R user /home"
>>> and "chmod -R 777 /home".
>>>
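(The single-file restore really is just a path lookup, since every
snapshot is browsable under the hidden .zfs directory.  For example,
with an invented snapshot name and paths:

    # no mount step: each snapshot appears as a read-only directory
    cd /storage/backup/.zfs/snapshot/2008-10-15/site1/server1/etc
    scp rc.conf root@server1.site1:/etc/rc.conf
)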
>>> Our backup script is pretty much just a double-for loop that scans a
>>> set of site-name directories for server config files, and runs rsync
>>> in parallel (1 per remote site).
>>>
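(A guess at the shape of such a double-for loop, with invented
config-file conventions; each .conf is assumed to set $HOST and
$EXCLUDES:

    #!/bin/sh
    for site in /storage/backup/*/ ; do
        (
            # inner loop: this site's servers, one after another
            for conf in "$site"conf/*.conf ; do
                . "$conf"        # sets $HOST and $EXCLUDES
                rsync -a --delete $EXCLUDES \
                    "root@$HOST:/" "$site$HOST/"
            done
        ) &                      # outer loop: one stream per site
    done
    wait                         # block until every site finishes

$EXCLUDES is left unquoted on purpose so it can expand into several
--exclude flags.)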
>>> We were looking into using variations on rsnapshot, custom
>>> squashfs/hardlink stuff, and other solutions, but once we started
>>> using ZFS, we stopped looking down those roads.  We were able to do
>>> in 3 days of testing and scripting what we hadn't been able to do in
>>> almost a month of research and testing.
>
>> Do you experience problems with the snapshots?
>> Last time I tried something similar for backups, the machine
>> began to spit errors after a few days of snapshots.
>>
>> http://lists.freebsd.org/pipermail/freebsd-fs/2008-February/004413.html
>
> We have 72 daily snapshots so far.  Have had up to 30 of them mounted
> read-only while looking for the right version of a file to restore.
>
> These are ZFS snapshots, very different from UFS snapshots.
>
> -- 
> Freddie Cash
> fjwcash at gmail.com

Yes,

Mine were ZFS snapshots too, and I never managed to create more than a
few days' worth of snapshots before the machine started to print "bad
file descriptor" errors while trying to access the snapshot directory.
But I guess (hope) this problem no longer exists, given that you are
able to keep 72 snapshots.


--
Regards,
Nikolay Denev





