Anyone used rsync scriptology for incremental backup?

Freddie Cash fjwcash at gmail.com
Thu Oct 30 09:04:15 PDT 2008


On October 30, 2008 08:18 am Nikolay Denev wrote:
> On 30 Oct, 2008, at 17:04 , Freddie Cash wrote:
> > On October 30, 2008 01:25 am Nikolay Denev wrote:
> >> On 30 Oct, 2008, at 07:00 , Freddie Cash wrote:
> >>> On Thu, Oct 30, 2008 at 1:50 AM, Andrew Snow <andrew at modulus.org>
> >>> wrote:
> >>>> In this way, each day we generate a batch file that lets us step
> >>>> back one day.  The diffs themselves, compressed with gzip, are
> >>>> extremely space efficient.  We can step back potentially hundreds
> >>>> of days, though it seems to throw errors sometimes when backing up
> >>>> Windows boxes, which I haven't tracked down yet.
> >>>>
> >>>> But to be honest, soon you can save yourself a lot of hassle by
> >>>> simply using ZFS and taking snapshots.  It'll be faster, and with
> >>>> compression very space efficient.
> >>>
> >>> That's exactly what we do: use ZFS and rsync.  We have a ZFS
> >>> /storage/backup filesystem, with directories for each remote site,
> >>> and sub-directories for each server to be backed up.
> >>>
> >>> Each night we snapshot the directory, then run rsync to back up each
> >>> server.  Snapshots are named with the current date.  For 80 FreeBSD
> >>> and Linux servers, we average 10 GB of changed data a night.
> >>>
> >>> No muss, no fuss.  We've used it to restore entire servers (boot
> >>> off a Knoppix/Frenzy CD, format partitions, rsync back), individual
> >>> files (no mounting required, just cd into the
> >>> .zfs/snapshot/snapshotname directory and scp the file), and even
> >>> once to restore the permissions on a pair of servers where a
> >>> clueless admin ran "chmod -R user /home" and "chmod -R 777 /home".
> >>>
> >>> Our backup script is pretty much just a double-for loop that scans
> >>> a set of site-name directories for server config files, and runs
> >>> rsync in parallel (1 per remote site).
> >>>
> >>> We were looking into using variations on rsnapshot, custom
> >>> squashfs/hardlink stuff, and other solutions, but once we started
> >>> using ZFS, we stopped looking down those roads.  We were able to do
> >>> in 3 days of testing and scripting what we hadn't been able to do in
> >>> almost a month of research and testing.
> >>
> >> Do you experience problems with the snapshots?
> >> Last time I tried something similar for backups, the machine
> >> began to spit errors after a few days of snapshots.
> >>
> >> http://lists.freebsd.org/pipermail/freebsd-fs/2008-February/004413.html
> >
> > We have 72 daily snapshots so far.  We've had up to 30 of them mounted
> > read-only while looking for the right version of a file to restore.
> >
> > These are ZFS snapshots, very different from UFS snapshots.
> >
> > --
> > Freddie Cash
> > fjwcash at gmail.com
>
> Yes,
>
> Mine were ZFS snapshots too, and I never managed to keep more than a
> few days' worth of snapshots before the machine started printing "bad
> file descriptor" errors when accessing the snapshot directory.
> But I guess (hope) the problem no longer exists if you are able to
> keep 72 snapshots.

Well, hopefully we're not just lucky.  :)

We're running 64-bit FreeBSD 7-STABLE from August (after the first round
of ZFS patches hit the -STABLE tree).  It took about three weeks to get
the kernel and ARC tuning set right.  Since then, it's been smooth
sailing.

For posterity's sake:

uname -a:
FreeBSD megadrive.sd73.bc.ca 7.0-STABLE FreeBSD 7.0-STABLE #0: Tue Aug 19 
10:39:29 PDT 2008     
root at megadrive.sd73.bc.ca:/usr/obj/usr/src/sys/ZFSHOST  amd64

/boot/loader.conf:
zfs_load="YES"
hw.ata.ata_dma=0
kern.hz="100"
vfs.zfs.arc_min="512M"
vfs.zfs.arc_max="768M"
vfs.zfs.prefetch_disable="1"
vfs.zfs.zil_disable="0"
vm.kmem_size="1596M"
vm.kmem_size_max="1596M"
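
To keep an eye on the ARC and kmem settings at runtime (sysctl names from
memory, so double-check them on your build):

sysctl kstat.zfs.misc.arcstats.size     # current ARC size, in bytes
sysctl kstat.zfs.misc.arcstats.c_max    # should line up with vfs.zfs.arc_max
sysctl vm.kmem_size vm.kmem_size_max    # confirm the kmem tunables took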

The ata_dma=0 is needed as / is a gmirror of two 2 GB CompactFlash cards 
attached to IDE adapters, and they don't support DMA.
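
For what it's worth, the mirror itself is just the standard gmirror
recipe (the ad device names below are placeholders, not necessarily
what's in this box):

gmirror label -v gm0 ad0 ad2
# GEOM_MIRROR has to be compiled into the kernel or loaded with
# geom_mirror_load="YES"; /etc/fstab then points at /dev/mirror/gm0s1a
# (or however the cards are sliced).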

And the zpool is a raidz2 of 12x400 GB SATA drives connected to a 3Ware 
9550SXU-16ML, and 12x500 GB SATA drives connected to a 3Ware 9650SE-12ML, 
all configured as SingleDisk arrays (so 24 daXX devices).  There's just 
over 9 TB of usable space in the zpool.
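
If anyone wants to reproduce the pool layout, it amounts to something
like this (a sketch only; I'm assuming one raidz2 vdev per controller,
and the daXX numbers are illustrative):

zpool create storage \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 \
    raidz2 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23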

zfs list -t filesystem:
NAME                          USED  AVAIL  REFER  MOUNTPOINT
storage                      3.32T  4.43T   117K  /storage
storage/Backup186             697M  4.43T   697M  /storage/Backup186
storage/backup               3.25T  4.43T  2.21T  /storage/backup
storage/home                 19.9G  4.43T  19.9G  /home
storage/tmp                  27.2M  4.43T  27.2M  /tmp
storage/usr                  2.71G  4.45T   223M  /usr
storage/usr/local             128M  4.45T   128M  /usr/local
storage/usr/obj              1.63G  4.45T  1.63G  /usr/obj
storage/usr/ports             540M  4.45T   449M  /usr/ports
storage/usr/ports/distfiles  91.2M  4.45T  91.2M  /usr/ports/distfiles
storage/usr/src               210M  4.45T   210M  /usr/src
storage/var                  1.75G  4.45T  1.75G  /var

zfs list -t snapshot | wc -l
      74
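
And in case it helps anyone, the double-for loop I mentioned boils down
to roughly the sketch below.  It is not our exact script; the directory
layout and the one-hostname-per-.conf-file format are simplified
stand-ins, and in practice you would add excludes and error handling:

#!/bin/sh
# Nightly backup: snapshot the backup filesystem, then rsync every
# server, with the sites running in parallel (one rsync at a time per
# site).
BASE=/storage/backup
DATE=$(date +%Y-%m-%d)

zfs snapshot storage/backup@${DATE}

for site in ${BASE}/*; do
    [ -d "${site}" ] || continue
    (
        for conf in "${site}"/*.conf; do
            [ -f "${conf}" ] || continue
            server=$(basename "${conf}" .conf)
            host=$(cat "${conf}")        # each .conf just holds a hostname
            rsync -aH --delete --numeric-ids \
                "${host}":/ "${site}/${server}/"
        done
    ) &        # one subshell per site, all sites in parallel
done
wait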

-- 
Freddie Cash
fjwcash at gmail.com

