Re: [List] Backup/restore recipe

From: Frank Leonhardt <freebsd-doc_at_fjl.co.uk>
Date: Wed, 12 Nov 2025 16:45:51 UTC
On 12/11/2025 16:00, Eugene R wrote:
> Hello,
>
> Does anyone have any howtos/recipes for optimal backup and restore 
> strategies for a FreeBSD-based server? In particular, a "modern" ZFS 
> installation (pretty complex dataset tree) on a remote cloud system 
> accessible via SSH or console, with some external storage via smbfs or S3.
>
> I suppose we will need
> - partition layout
> - ZFS layout
> - /boot directory
> - /etc directory (including passwd, fstab, etc)
> - filesystem contents (using tar.gz or whatever) and/or
> - ZFS data that can be restored directly
>
> I imagine three potential scenarios:
> - selective restore of specific files or subtrees to a working FreeBSD 
> system (this one is reasonably obvious)
> - (essentially) exact duplicate of the original system state on the 
> same or different machine (ideally binary exact if hardware allows)
> - functionally equivalent duplicate (i.e., the same filesystem content 
> over the potentially different low-level layouts)
> In cases 2 and 3, we likely will have to start from a clean machine, 
> possibly with dummy Linux or FreeBSD installation.
>
> I will be grateful for any pointers or explanations.
>
> Best regards
> Eugene
>
First off, ZFS snapshots are your friend. It's very easy to create a 
cron job script that'll snapshot everything daily (or whatever) and 
rotate the snapshots. This allows you to roll back everything or just 
individual datasets, or simply to have a look at older files.

Here's a little script I run in a cronjob called "snapshot7days"

-------------------------

#!/bin/sh
# Keep a rolling week of daily snapshots for each dataset named on the
# command line. The destroy/rename steps will complain until a full week
# of snapshots has built up; those errors are harmless.
for ds in "$@"
do

zfs destroy -r "${ds}@7daysago"
zfs rename -r "${ds}@6daysago" @7daysago
zfs rename -r "${ds}@5daysago" @6daysago
zfs rename -r "${ds}@4daysago" @5daysago
zfs rename -r "${ds}@3daysago" @4daysago
zfs rename -r "${ds}@2daysago" @3daysago
zfs rename -r "${ds}@yesterday" @2daysago
zfs rename -r "${ds}@today" @yesterday
zfs snapshot -r "${ds}@today"

done

-------------------------

Not exactly complicated.

You run it by passing the datasets you want a snapshot of - e.g. 
"snapshot7days zr/jail/webserver zr/jail/dbserver ..."

The next fun thing is dataset replication - I replicate production 
servers to a duplicate off-site. See "zfs send" and "zfs receive". Send 
writes to stdout and receive reads from stdin, so you can pipe them over 
ssh (or, if local, use nc for speed). Having a replica of the whole 
pool on another set of drives off-site is a comforting feeling. And the 
best bit is you can do incremental updates (it only transfers the blocks 
that have changed between snapshots).
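
A rough sketch of how that looks (the pool, dataset and host names here 
are invented):

# First run: replicate the whole dataset and its snapshots off-site
zfs snapshot -r zr/jail/webserver@rep1
zfs send -R zr/jail/webserver@rep1 | ssh backup.example.com zfs receive -u backuppool/webserver

# Later runs: incremental send of only the blocks changed since @rep1
zfs snapshot -r zr/jail/webserver@rep2
zfs send -R -i @rep1 zr/jail/webserver@rep2 | ssh backup.example.com zfs receive -u backuppool/webserver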

If you're backing up on a local network to an SMB server, you can just 
pipe the dataset(s) into a large file on it using zfs send. Windoze 
won't know how to read it. If you have encrypted datasets you can 
send/receive using the --raw option, and then Windoze users won't even 
be able to dump the file and look at it. By default zfs send decrypts 
before sending. If you go the raw route you'll need to have kept the 
encryption key somewhere safe, and restore it with zfs load-key.
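
For instance (dataset name and mount point invented for illustration):

# Dump the dataset still encrypted (--raw) into a file on the SMB mount
zfs send --raw zr/private@today > /mnt/smbshare/private.zfsstream

# To restore: receive the raw stream, then unlock it with the saved key
zfs receive zr/private-restored < /mnt/smbshare/private.zfsstream
zfs load-key zr/private-restored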

I still use tar to dump to tape at file level, just in case ZFS stops 
working.
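
Something as simple as this does it, assuming a tape drive on sa0 (the 
paths are just examples):

# File-level dump of the important trees, readable with nothing but tar
tar -cvf /dev/sa0 /etc /usr/local/etc /home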

Speaking of tape, it's very easy to dump a dataset snapshot to a remote 
tape drive: zfs send pool/dataset@today | ssh user@remote "cat >/dev/sa0"
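
Getting it back off the tape is just the reverse (same invented names; 
rewind the tape first with mt if need be):

ssh user@remote "cat /dev/sa0" | zfs receive -F pool/dataset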

That way, if Amazon toasts your VM, you have an air-gapped copy in a 
place ransomware can't touch.

Regards, Frank.