N-way mirror read speedup in zfsonlinux
Alexander Motin
mav at FreeBSD.org
Sun Aug 4 19:02:14 UTC 2013
On 04.08.2013 21:22, Alexander Motin wrote:
> On 04.08.2013 21:18, Steven Hartland wrote:
>> Interesting stuff.
>>
>> I created a little test scenario here today to put this through its
>> paces.
>>
>> It's very basic: 10 x dd's reading 5 x 5GB test files to /dev/null on a
>> pool made up of 4 SSDs and 1 HDD in a mirror:
>>
>>   pool: tpool
>>  state: ONLINE
>>   scan: resilvered 38.5K in 0h0m with 0 errors on Sun Aug 4 18:13:59 2013
>> config:
>>
>>         NAME          STATE     READ WRITE CKSUM
>>         tpool         ONLINE       0     0     0
>>           mirror-0    ONLINE       0     0     0
>>             ada2      ONLINE       0     0     0
>>             ada3      ONLINE       0     0     0
>>             ada4      ONLINE       0     0     0
>>             ada5      ONLINE       0     0     0
>>             ada1      ONLINE       0     0     0
>>
>> The results are quite telling:-
>>
>> == Without Patch ==
>> === SSDs & HD ===
>> Read of 51200MB using bs 1048576 took 51 seconds @ 1003 MB/s
>> Read of 51200MB using bs 4096 took 51 seconds @ 1003 MB/s
>> Read of 51200MB using bs 512 took 191 seconds @ 268 MB/s
>>
>> === SSDs Only ===
>> Read of 51200MB using bs 1048576 took 40 seconds @ 1280 MB/s
>> Read of 51200MB using bs 4096 took 41 seconds @ 1248 MB/s
>> Read of 51200MB using bs 512 took 188 seconds @ 272 MB/s
>>
>> == With Patch ==
>> === SSDs & HD ===
>> Read of 51200MB using bs 1048576 took 32 seconds @ 1600 MB/s
>> Read of 51200MB using bs 4096 took 31 seconds @ 1651 MB/s
>> Read of 51200MB using bs 512 took 184 seconds @ 278 MB/s
>>
>> === SSDs Only ===
>> Read of 51200MB using bs 1048576 took 28 seconds @ 1828 MB/s
>> Read of 51200MB using bs 4096 took 29 seconds @ 1765 MB/s
>> Read of 51200MB using bs 512 took 185 seconds @ 276 MB/s
>>
>> Even with only the SSDs, the patched version performs
>> noticeably better. I suspect this is down to the fact that
>> the SSDs are of various makes and so have slightly different
>> I/O characteristics.
>>
>> N.B. The bs 512 tests can be mostly discounted, as they were CPU
>> limited in dd on the 8-core test machine.
>
> Could you also run tests with HDDs only and with different (lower)
> numbers of dd's? SSDs are much more forgiving due to the lack of seek time.
I couldn't wait and ran it myself with 4 x HDDs in a mirror:
Without patch:
1xdd 360MB/s
2xdd 434MB/s
4xdd 448MB/s
With patch:
1xdd 167MB/s
2xdd 310MB/s
4xdd 455MB/s
So yes, while it helps multi-threaded reads, sequential low-threaded
reads are heavily hurt, presumably because always sending the next
request to the least-busy child spreads a single sequential stream
across all the disks and defeats per-disk read-ahead. I would not call
it a win.
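
For context, the zfsonlinux change discussed below replaces the random
choice of mirror child with "pick the child that currently has the
fewest pending I/Os". A minimal C sketch of that idea, with made-up
names (mirror_child_t, mc_pending, mirror_child_select) rather than the
actual vdev_mirror.c code:

    #include <limits.h>

    /*
     * Hypothetical sketch of least-pending-I/O child selection for an
     * N-way mirror.  The names are made up for illustration; see the
     * zfsonlinux commit below for the real vdev_mirror.c code.
     */
    typedef struct mirror_child {
            int mc_pending;   /* I/Os currently queued to this child */
            int mc_error;     /* nonzero if the child is not readable */
    } mirror_child_t;

    static int
    mirror_child_select(const mirror_child_t *mc, int children)
    {
            int c, best = -1;
            int best_pending = INT_MAX;

            for (c = 0; c < children; c++) {
                    if (mc[c].mc_error != 0)
                            continue;     /* skip unreadable children */
                    if (mc[c].mc_pending < best_pending) {
                            best_pending = mc[c].mc_pending;
                            best = c;
                    }
            }
            return (best);    /* -1 if no readable child was found */
    }

Compared to random selection this naturally favours the faster or less
busy device (the SSDs over the HDD above), but it also means a single
sequential reader keeps bouncing between disks, which matches the drop
seen in the 1xdd HDD numbers.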
>> ----- Original Message ----- From: "Martin Matuska" <mm at FreeBSD.org>
>> To: <zfs-devel at freebsd.org>
>> Cc: "Xin Li" <delphij at FreeBSD.org>; "Steven Hartland" <smh at FreeBSD.org>
>> Sent: Sunday, August 04, 2013 10:25 AM
>> Subject: Re: N-way mirror read speedup in zfsonlinux
>>
>>
>>> Attached is a FreeBSD version of this patch for testing and comments,
>>> including a sysctl tunable:
>>> http://people.freebsd.org/~mm/patches/zfs/vdev_mirror.c.patch
>>>
>>> On 2013-07-12 11:21, Martin Matuška wrote:
>>>> Hi everyone,
>>>>
>>>> zfsonlinux has implemented a change in the N-way mirror device
>>>> selection algorithm: it selects the device with the least pending
>>>> I/O instead of choosing at random. They measured a read bandwidth
>>>> increase of up to 50% and an IOPS increase of up to 10%.
>>>>
>>>> This might be useful for the common ZFS code, and we might consider
>>>> porting it to illumos and FreeBSD:
>>>> https://github.com/zfsonlinux/zfs/issues/1461
>>>> https://github.com/zfsonlinux/zfs/commit/556011dbec2d10579819078559a77630fc559112
>>>>
>
>
--
Alexander Motin