8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

Dan Naumov dan.naumov at gmail.com
Sun Jan 24 18:29:54 UTC 2010


On Sun, Jan 24, 2010 at 8:12 PM, Bob Friesenhahn
<bfriesen at simple.dallas.tx.us> wrote:
> On Sun, 24 Jan 2010, Dan Naumov wrote:
>>
>> This works out to 1GB in 36.2 seconds / 28.2 MB/s in the first test and
>> 4GB in 143.8 seconds / 28.4 MB/s, roughly consistent with the bonnie
>> results. It also sadly seems to confirm the very slow speed :(
>> The disks are attached to a 4-port Sil3124 controller and again, my
>> Windows benchmarks showing 65 MB/s+ were done on the exact same machine,
>> with the same disks attached to the same controller. The only difference
>> was that in Windows the disks weren't in a mirror configuration but were
>> tested individually. I do understand that a mirror setup offers roughly
>> the same write speed as an individual disk, while the read speed usually
>> varies from "equal to individual disk speed" to "nearly the throughput
>> of both disks combined" depending on the implementation, but I see no
>> obvious reason why my setup offers both read and write speeds at roughly
>> 1/3 to 1/2 of what the individual disks are capable of. Dmesg shows:
>
> There is a misstatement in the above in that a "mirror setup offers roughly
> the same write speed as individual disk".  It is possible for a mirror setup
> to offer a similar write speed to an individual disk, but it is also quite
> possible to get 1/2 (or even 1/3) the speed. A ZFS write to a mirror pair
> requires two independent writes.  If these writes go down independent I/O
> paths, then there is hardly any overhead from the 2nd write.  If the writes
> go through a bandwidth-limited shared path, then they will contend for that
> bandwidth and you will see much less write performance.
>
> As a simple test, you can temporarily remove the mirror device from the pool
> and see if the write performance dramatically improves. Before doing that,
> it is useful to see the output of 'iostat -x 30' while under heavy write
> load to see if one device shows a much higher svc_t value than the other.
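
(For reference, the check Bob suggests would look something like this on
my box - just a sketch, I haven't captured that output yet, and the test
file name below is made up:

# in one session, put a sustained write load on the pool:
dd if=/dev/zero of=/home/jago/loadfile bs=1M count=4096
# in another, watch per-device service times at 30-second intervals:
iostat -x 30

If one disk consistently shows a much higher svc_t than the other while
dd is running, that disk or its path is the likely bottleneck.)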

Ow, ow, WHOA:

atombsd# zpool offline tank ad8s1a

[jago at atombsd ~]$ dd if=/dev/zero of=/home/jago/test3 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 16.826016 secs (63814382 bytes/sec)

Offlining one half of the mirror bumps the dd write speed from 28 MB/s to
64 MB/s! Let's see how the Bonnie results change:

Mirror with both parts attached:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         8192 18235 46.7 23137 19.9 13927 13.6 24818 49.3 44919 17.3 134.3  2.1

Mirror with 1 half offline:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         1024 22888 58.0 41832 35.1 22764 22.0 26775 52.3 54233 18.3 166.0  1.6

Ok, the Bonnie results improve as well, though less dramatically than the
raw dd numbers: block writes go from about 23 MB/s to 41 MB/s, while the
sequential reads only climb from about 44 MB/s to 53 MB/s (and note the
second run used a smaller 1024 MB file size, so it isn't a perfectly fair
comparison).
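
Two follow-ups on my end. First, a guess at the cause: if the Sil3124 in
this box hangs off a plain 32-bit/33 MHz PCI slot, the whole controller
shares roughly 133 MB/s of theoretical bus bandwidth (real-world throughput
on a busy PCI bus is usually well under that). With both halves of the
mirror attached, every block crosses that shared path twice, so the pool's
write ceiling is at most half of whatever the bus actually delivers - which
would be exactly the bandwidth-limited shared path Bob describes and would
go a long way toward explaining ~28 MB/s to the pool versus ~64 MB/s to a
single disk. This is speculation until I confirm how the controller is
attached, though.

Second, before any further testing I need to reattach the offlined disk
and let it resilver. If I have the commands right, that should just be:

atombsd# zpool online tank ad8s1a
atombsd# zpool status tank

(the second command to watch the resilver complete - again just a sketch,
I haven't run these yet as of this mail).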

- Sincerely,
Dan Naumov

