ZFS resilvering strangles IO

Michael Gmelin freebsd at grem.de
Tue May 8 22:48:25 UTC 2012


On May 9, 2012, at 00:42, Bob Friesenhahn wrote:

> On Wed, 9 May 2012, Michael Gmelin wrote:
>>>> 
>>>> Setting vfs.zfs.vdev_max_pending="4" in /boot/loader.conf (or whatever
>>>> value you want).  The default is 10.
>> 
>> Do you think this will actually make a difference? As far as I
>> understand, my primary problem is not latency but throughput. A simple
>> example is dd if=/dev/zero of=filename bs=1m, which gave me 500 KB/s.
>> Latency might be an additional problem (or am I misled, and a shorter
>> queue would raise the process's chance to get data through?).
> 
> The effect may be observed in real time on a running system.  Latency and throughput go hand in hand.  The 'dd' command is single-threaded and sequential: it waits for the current I/O to return before issuing the next one.  If the wait is shorter (fewer pending requests in line), then throughput does increase.  System total throughput (which includes the resilver operations) may not increase, but the throughput observed by an individual waiter may.
> 
> The default for vdev_max_pending on Solaris was/is 32.  If FreeBSD uses a default of 10, then reducing it further may have a less dramatic effect.
> 

That makes sense.
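
A quick sanity check against my own numbers (a back-of-envelope
sketch, assuming the wait per request is dominated by queueing behind
the resilver I/O): with bs=1m, dd issues one 1 MB request at a time,
so

    single-stream throughput ~= request size / per-request wait
    1 MB / ~2.0 s ~= 500 KB/s   (what I measured)
    1 MB / ~0.5 s ~=   2 MB/s   (if a shorter queue cuts the wait)

So even if total device throughput stays flat, a shorter queue should
show up directly in what a single sequential writer sees.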

I will run more sophisticated I/O tests next time to get a more
complete picture.
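
Concretely, something along these lines (a rough sketch; the pool
name "tank", the bench file path, and the tunable value of 4 are
placeholders to adjust):

    # /boot/loader.conf -- cap the per-vdev queue depth (default 10);
    # a loader tunable, so it takes effect at the next boot
    vfs.zfs.vdev_max_pending="4"

    # sequential single-writer throughput, same test as before
    dd if=/dev/zero of=/tank/benchfile bs=1m count=4096

    # per-disk latency (ms/w) and queue length (L(q)), refreshed each second
    gstat -I 1s

    # resilver progress, to correlate with the numbers above
    zpool status tank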

-- 
Michael

> Bob
> -- 
> Bob Friesenhahn
> bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


