ZFS dedup and replication

krad kraduk at gmail.com
Thu Dec 1 10:44:17 UTC 2011


On 28 November 2011 23:01, Techie <techchavez at gmail.com> wrote:

> Hi all,
>
> Are there any plans to implement sharing of the ZFS DDT (dedup table),
> or to make ZFS aware of which duplicate blocks already exist on a
> remote system?
>
> From how I understand it, the zfs send/recv stream does not know about
> the duplicate blocks on the receiving side when using zfs send -D -i
> to send only incremental changes.
>
> Take, for example, an application that I back up each night to a ZFS
> file system. I want to replicate this every night to my remote site.
> Each night that I back up, I create a tar file on the ZFS data file
> system. When I go to send an incremental stream, it sends the entire
> tar file to the destination even though over 90% of those blocks
> already exist there. Are there any plans to make ZFS aware of what
> already exists at the destination site, to eliminate the need to send
> duplicate blocks over the wire? I believe zfs send -D only eliminates
> the duplicate blocks within the stream.
>
> Perhaps I am wrong.
>
>
> Thanks
> Jimmy
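
Your understanding is correct: zfs send -D only deduplicates blocks
within the stream it is building; neither end consults the other's
DDT, so blocks the destination already holds are still sent. As a
rough sketch (the pool, dataset and host names here are made up):

    # deduplicated incremental stream -- dedup applies within the
    # stream only, not against what the remote pool already has
    zfs send -D -i tank/backup@mon tank/backup@tue | \
        ssh remotehost zfs recv -F tank/backup

Nothing in that pipeline knows which blocks already exist on the
remote side.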


That said, why tar the stuff up at all? Just take a zfs snapshot each
night and send the incremental; then you bypass the whole issue.
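
Something like the following, run nightly (dataset and host names are
placeholders, adjust to taste): snapshot the dataset the application
writes into, then send only the difference between the last two
snapshots:

    # hypothetical nightly job, e.g. from cron
    zfs snapshot tank/appdata@2011-12-01
    zfs send -i tank/appdata@2011-11-30 tank/appdata@2011-12-01 | \
        ssh remotehost zfs recv -F tank/appdata

An incremental send carries only the blocks that changed between the
two snapshots, so the ~90% that is unchanged never crosses the wire,
and no shared dedup table is needed on either end.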

