Re: ZFS on high-latency devices

From: Peter Jeremy <peter_at_rulingia.com>
Date: Sat, 28 Aug 2021 13:37:47 +1000
On 2021-Aug-20 00:16:28 +0100, Johannes Totz <johannes_at_jo-t.de> wrote:
>Do you have geli included in those perf tests? Any difference if you 
>leave it out?

Yes, I mentioned geli (and I also have IPSEC, which I forgot to
mention).  I haven't tried taking them out but dd(1) tests suggest
they aren't a problem.
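
For what it's worth, the kind of dd(1) sanity check I mean looks like
this (a sketch only -- the device names are hypothetical, and the geli
provider must already be attached):

```shell
# Sequential read straight off the geli provider (decryption in the path):
dd if=/dev/da0.eli of=/dev/null bs=1m count=1024
# Same read against the raw underlying disk, bypassing geli, for comparison:
dd if=/dev/da0 of=/dev/null bs=1m count=1024
```

If the two throughput figures are close, geli overhead isn't the
bottleneck at the sequential rates involved here.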

>What's making the throughput slow? zfs issuing a bunch of small writes 
>and then trying to read something (unrelated)? Is there just not enough 
>data to be written to saturate the link?

At least from eyeballing gstat, there are basically no reads involved
in the zfs recv. The problem seems to be that writes aren't evenly
spread across all the vdevs, combined with very long delays associated
with flushing snapshots.  I have considered instrumenting ggate{c,d}
to see if I can identify any issues.
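
A quick way to watch whether writes are being spread evenly is to
sample per-vdev and per-provider statistics while the recv runs (a
sketch; "tank" and the ggate provider names are placeholders):

```shell
# Per-vdev I/O distribution for the pool, one-second samples:
zpool iostat -v tank 1
# GEOM-level view filtered to the ggate providers (FreeBSD gstat):
gstat -f 'ggate' -I 1s
```

Uneven write counts across otherwise-identical vdevs in the iostat
output would line up with what gstat already suggests.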

>Totally random thought: there used to be a vdev cache (not sure if 
>that's still around) that would inflate read requests to hopefully drag 
>in more data that might be useful soon.

ZFS includes that functionality itself.
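
(For completeness: whether that prefetch machinery is active can be
checked via sysctl. The exact tunable names vary between ZFS versions,
so treat these as illustrative rather than authoritative:)

```shell
# Illustrative only -- tunable names differ across ZFS/OpenZFS versions.
sysctl vfs.zfs.prefetch_disable
sysctl -a | grep -i 'vdev.*cache'
```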

>Have you tried hastd?

I haven't, but hastd also uses GEOM_GATE, so I wouldn't expect
significantly different behaviour.

-- 
Peter Jeremy

Received on Sat Aug 28 2021 - 03:37:47 UTC