ZFS resilvering strangles IO

Bob Friesenhahn bfriesen at simple.dallas.tx.us
Tue May 8 22:42:15 UTC 2012


On Wed, 9 May 2012, Michael Gmelin wrote:
>>>
>>> Setting vfs.zfs.vdev_max_pending="4" in /boot/loader.conf (or whatever
>>> value you want).  The default is 10.
>
> Do you think this will actually make a difference? As far as I
> understand, my primary problem is not latency but throughput. A simple
> example is dd if=/dev/zero of=filename bs=1m, which gave me 500kb/s.
> Latency might be an additional problem (or am I misled, and a shorter
> queue would raise the process's chance to get data through?).

The effect may be observed in real-time on a running system.  Latency 
and throughput go hand in hand.  The 'dd' command is single-threaded 
and sequential: it waits for the current I/O to complete before it 
issues the next one.  If the wait is shorter (fewer pending requests 
ahead of it in the queue), then its throughput does increase.  Total 
system throughput (which includes the resilver operations) may not 
increase, but the throughput observed by an individual waiter may.

The default for vdev_max_pending on Solaris was/is 32.  If FreeBSD 
uses a default of 10, then reducing it further may have a less 
dramatic effect.
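For reference, a sketch of where the tunable is set (the value 4 is 
the suggestion from earlier in the thread; whether the sysctl is 
writable at runtime depends on the FreeBSD version in use):

```
# /boot/loader.conf -- applied at the next boot
vfs.zfs.vdev_max_pending="4"

# Or, if the sysctl is writable on your FreeBSD version,
# adjust it on the running system and watch the effect live:
#   sysctl vfs.zfs.vdev_max_pending=4
```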

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
