Re: ZFS on high-latency devices

From: Peter Jeremy <>
Date: Sat, 28 Aug 2021 12:03:05 +1000
On 2021-Aug-22 17:48:13 -0600, Alan Somers <> wrote:
>mbuffer is not going to help the OP.

I agree that mbuffer won't help.  I already use something equivalent to
remove the read latency on the send side.

>And if I understand correctly, he's
>connecting over a WAN, not a LAN.  ZFS will never achieve decent
>performance in such a setup.  It's designed as a local file system, and
>assumes it can quickly read metadata off of the disks at any time.

Yes.  But, at least with a relatively empty destination, zfs actually does
almost no reads whilst doing a recv.  As far as I can tell, the problem is
that zfs does a complete flush of all data and metadata at snapshot
boundaries.  This is painful even with local filesystems (it typically
takes >1s to recv an empty snapshot with local disks).

>OP's best option is to go with "a": encrypt each dataset and send them with
>"zfs send --raw".  I don't know why he thinks that it would be "very
>difficult".  It's quite easy, if he doesn't care about old snapshots.  Just:

I agree that "zfs send --raw" is the best solution to network RTT and
I agree that migrating to ZFS native encryption is quite easy if you
don't care about any ZFS features.  However, I do care about old
snapshots - migrating to ZFS native encryption is a non-starter if it
involves throwing away all my old snapshots and clones.

I have also been working on migrating to native encryption.  I know how to
migrate snapshots and think there a way to migrate clones (but I need to
validate it).  The remaining definite blocker is working out how to migrate
the pool root filesystem (including snapshots).
Peter Jeremy

Received on Sat Aug 28 2021 - 02:03:05 UTC

Original text of this message