[Bug 237807] ZFS: ZVOL writes fast, ZVOL reads abysmal...

bugzilla-noreply at freebsd.org bugzilla-noreply at freebsd.org
Sat May 11 02:59:12 UTC 2019


https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237807

sigsys at gmail.com changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |sigsys at gmail.com

--- Comment #2 from sigsys at gmail.com ---
These are random reads, right?

How are you benchmarking it?  Is it over iSCSI?

If the benchmark program sends its read requests one after the other (i.e., it
waits for each read to return before sending the next), then it's effectively
waiting on one random read being dispatched to one disk at a time.  So disk
latency becomes the limiting factor, and there isn't much of anything that can
be done to reduce that latency (apart from caching).
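
As a rough illustration (the numbers here are made up), a strictly serial
4 KiB reader is bounded by block size divided by per-read latency: at ~8 ms
per random read, that's about 4096 / 0.008 = 500 KiB/s, no matter how many
disks are in the pool.  Below is a minimal C sketch of that serial pattern;
the zvol path is just an example:

    /*
     * Hypothetical serial benchmark loop: each pread() must complete
     * before the next one is issued, so only one request is ever in
     * flight and throughput is capped by single-disk latency.
     */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        int fd = open("/dev/zvol/tank/vol0", O_RDONLY);  /* example path */
        if (fd < 0)
            return 1;
        for (int i = 0; i < 10000; i++) {
            off_t off = (off_t)(random() % 1000000) * 4096;
            if (pread(fd, buf, sizeof buf, off) < 0)  /* blocks until done */
                return 1;
        }
        close(fd);
        return 0;
    }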

And I think the way gstat measures %busy is that it gives you the fraction of
time during which there was at least one operation in flight.  Because the
zvol is almost always waiting on at least one disk, it shows as nearly 100%
busy when measured this way; a device serving one request at a time and a
device serving 32 requests in parallel can both read as 100% busy.  I'm
guessing it just wasn't designed to measure the performance of devices that
could potentially serve a lot of requests concurrently.  I'm not certain
though.

The way to speed this up is to send the zvol concurrent read requests, so
that they can be dispatched to multiple disks at the same time.

If you can do that, running multiple benchmarks at the same time on the zvol
should show a much higher total throughput.  Or maybe you can tell the
benchmark program to use multiple threads/processes or to use AIO, as in the
sketch below.
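
Here's a minimal POSIX AIO sketch of what I mean (the queue depth and zvol
path are just examples): it issues 16 reads without waiting in between, so
ZFS gets a chance to dispatch them to several vdevs at once.

    /*
     * Hypothetical concurrent benchmark using POSIX AIO: 16 random
     * reads are submitted back to back, then reaped.  A real benchmark
     * would resubmit as each one completes to keep the queue full.
     */
    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define DEPTH 16

    int main(void)
    {
        static char bufs[DEPTH][4096];
        struct aiocb cbs[DEPTH];
        int fd = open("/dev/zvol/tank/vol0", O_RDONLY);  /* example path */
        if (fd < 0)
            return 1;
        for (int i = 0; i < DEPTH; i++) {
            memset(&cbs[i], 0, sizeof cbs[i]);
            cbs[i].aio_fildes = fd;
            cbs[i].aio_buf = bufs[i];
            cbs[i].aio_nbytes = sizeof bufs[i];
            cbs[i].aio_offset = (off_t)(random() % 1000000) * 4096;
            if (aio_read(&cbs[i]) != 0)  /* submit without waiting */
                return 1;
        }
        for (int i = 0; i < DEPTH; i++) {
            const struct aiocb *list[1] = { &cbs[i] };
            while (aio_error(&cbs[i]) == EINPROGRESS)
                aio_suspend(list, 1, NULL);  /* sleep until it completes */
            if (aio_return(&cbs[i]) < 0)
                return 1;
        }
        close(fd);
        return 0;
    }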

Maybe it's already trying to send concurrent reads, but somewhere along the
way the requests get serialized.  Say you use iSCSI or a VM: the benchmark
program must have a way to tell the client kernel to issue concurrent
requests (if that IO goes through a filesystem, then that filesystem must
have good support for issuing IO concurrently); the client kernel must then
translate that into concurrent requests over iSCSI (through tagged command
queuing, TCQ) or over whatever VM<->host protocol is in use; and that in
turn must be translated into concurrent requests against the zvol (which
might not happen in all cases depending on how the zvol is being interfaced
with, I'm not sure).  Even if all of that works well you probably won't get
the full throughput that you would get locally, but it should still be much
better than the performance of a single disk.
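
One way to check whether concurrency survives that whole chain (the device
path and process count below are made up) is to run several independent
serial readers at once.  Each child blocks on its own read, but together
they keep several requests outstanding in the client kernel, which tagged
queuing (or the VM's block protocol) can then carry down to the zvol:

    /*
     * Hypothetical test: 8 forked readers, each doing blocking 4 KiB
     * random preads.  If total throughput scales with the number of
     * readers, the path supports concurrency; if it stays pinned at
     * single-disk speed, something in between is serializing.
     */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NPROCS 8

    static void reader(const char *path)
    {
        char buf[4096];
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            _exit(1);
        srandom((unsigned)getpid());  /* different offsets in each child */
        for (int i = 0; i < 10000; i++)
            pread(fd, buf, sizeof buf, (off_t)(random() % 1000000) * 4096);
        _exit(0);
    }

    int main(void)
    {
        for (int i = 0; i < NPROCS; i++)
            if (fork() == 0)
                reader("/dev/da0");  /* e.g. the iSCSI LUN on the client */
        for (int i = 0; i < NPROCS; i++)
            wait(NULL);
        return 0;
    }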

In any case, your pool should be able to serve way more requests than that
in total, but when the load is directed at a single zvol, there has to be a
way to submit the requests concurrently.

-- 
You are receiving this mail because:
You are the assignee for the bug.

