ZFS dedup and replication

Techie techchavez at gmail.com
Mon Nov 28 23:23:49 UTC 2011


Hi all,

Are there any plans to implement sharing of the ZFS dedup table (DDT),
or to make ZFS aware of the duplicate blocks that already exist on a
remote system?

From how I understand it, the zfs send/recv stream does not know about
the duplicate blocks on the receiving side when using zfs send -D -i
to send only incremental changes.
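
For instance (the pool, dataset, snapshot, and host names here are
made up):

  # -D eliminates duplicate blocks within the stream itself;
  # -i restricts the stream to changes since the named snapshot.
  # Neither step consults the DDT on the receiving pool.
  zfs send -D -i tank/backup@mon tank/backup@tue | \
      ssh remotehost zfs recv tank/backup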

Take, for example, an application that I back up each night to a ZFS
file system and want to replicate every night to my remote site. Each
nightly backup creates a tar file on the ZFS dataset. When I then send
an incremental stream, the entire tar file goes to the destination
even though over 90% of its blocks already exist there. Are there any
plans to make ZFS aware of what already exists at the destination
site, so that duplicate blocks need not be sent over the wire? As I
understand it, zfs send -D only eliminates the duplicate blocks within
the stream.
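
To make the scenario concrete, the nightly routine looks roughly like
this (again, all names and dates are hypothetical):

  # 1. Create the nightly tar on the ZFS dataset
  tar cf /tank/backup/app-2011-11-28.tar /opt/app

  # 2. Snapshot the dataset
  zfs snapshot tank/backup@2011-11-28

  # 3. Replicate incrementally; the entire new tar still crosses the
  #    wire, even though most of its blocks match the previous night's
  #    tar already stored at the remote site
  zfs send -D -i tank/backup@2011-11-27 tank/backup@2011-11-28 | \
      ssh remotehost zfs recv -F tank/backup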

Perhaps I am wrong.


Thanks
Jimmy
