zfs performance degradation

Adam Vande More amvandemore at gmail.com
Thu Sep 24 06:07:34 UTC 2015


On Tue, Sep 22, 2015 at 12:38 PM, Dmitrijs <war at dim.lv> wrote:

> Good afternoon,
>
>   I've encountered strange ZFS behavior - serious performance degradation
> over a few days. Right after setting up a fresh ZFS pool (2 HDDs in a
> mirror) I tested reads on a 30 GB file with dd, like
> dd if=test.mkv of=/dev/null bs=64k
> and got 150+ MB/s.
>
> Today I get only about 90 MB/s. I have tested with different block sizes,
> many times; the speed seems to be stable within +-5%.
>

I doubt that.  Block sizes have a large impact on dd read efficiency
regardless of the filesystem.  So unless you were testing the speed of
cached data, there would have been a significant difference between runs of
different block sizes.
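
As a rough sketch (assuming the same test.mkv, and assuming you keep the ARC
from serving the second run out of cache, e.g. by exporting and re-importing
the pool or temporarily setting primarycache=metadata on the dataset), you
would expect runs like these to differ noticeably on uncached data:

dd if=test.mkv of=/dev/null bs=4k
dd if=test.mkv of=/dev/null bs=1m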


>
> nas4free: divx# dd if=test.mkv of=/dev/null bs=64k
> 484486+1 records in
> 484486+1 records out
> 31751303111 bytes transferred in 349.423294 secs (90867734 bytes/sec)
>

Perfectly normal for the parameters you've imposed.  What happens if you
use bs=1m?
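
That is, something along the lines of:

dd if=test.mkv of=/dev/null bs=1m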


> Computer\system details:
>
>  nas4free: /mnt# uname -a
> FreeBSD nas4free.local 10.2-RELEASE-p2 FreeBSD 10.2-RELEASE-p2 #0
> r287260M: Fri Aug 28 18:38:18 CEST 2015 root at dev.nas4free.org:/usr/obj/nas4free/usr/src/sys/NAS4FREE-amd64
> amd64
>
> RAM 4Gb
> I've got brand new 2x HGST HDN724040ALE640, 4 TB, 7200 rpm (ada0, ada1) for
> pool data4.
> Another pool, data2, performs slightly better even on older/cheaper WD
> Green 5400 rpm HDDs, up to 99 MB/s.
>

What parameters are you using on both pools to make this claim?
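
For an apples-to-apples comparison you would want the identical command on
each pool, against files of similar size, e.g. (the paths here are just
placeholders for wherever the files actually live):

dd if=/mnt/data2/somefile.mkv of=/dev/null bs=1m
dd if=/mnt/data4/test.mkv of=/dev/null bs=1m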


>
> While dd is running, gstat is showing like:
>
> dT: 1.002s  w: 1.000s
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>     0    366    366  46648    1.1      0      0    0.0   39.6| ada0
>     1    432    432  54841    1.0      0      0    0.0   45.1| ada1
>
>
>
> so IOPS are very high, while %busy is quite low.


%busy is a widely misunderstood stat.  Do not use it to judge whether your
drives are being utilized efficiently.  L(q), ops/s, and the per-request
service times (ms/r, ms/w) are what is interesting.
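
If you want a cleaner view while the test runs, something like the following
(flags as I remember them from FreeBSD's gstat: -p restricts output to the
physical providers, -I sets the polling interval) cuts the noise down:

gstat -p -I 1s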


> It averages about 50%, with rare peaks up to 85-90%.
>

Basically as close to perfect as you'll ever get, considering how you
invoked dd.  ZFS doesn't split a sequential read across the disks of a
single vdev, only across the vdevs of a pool, and only if multiple vdevs
were in the pool when the file was written.
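
You can check the vdev layout yourself with (using the pool name from your
mail):

zpool status data4

A two-disk mirror is a single vdev, so there is only that one vdev for ZFS
to schedule the sequential read against.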

Your testing methodology is poorly thought out and implemented, or at least
that is how it was presented to us.  Testing needs to be a methodical,
repeatable process that accounts for all the variables involved.  All I saw
was a bunch of haphazard, scattered attempts to test the sequential read
speed of a ZFS mirror.  Is that really representative of the pool's actual
workload?  Did you clear caches between tests?  Why are other daemons like
proftpd running during the testing?  Etc., ad nauseam.
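
A rough sketch of what I mean, assuming the pool is data4 and the test file
lives at /mnt/data4/test.mkv (adjust the paths and the service names to your
setup):

# keep the ARC from caching file data for the duration of the test
zfs set primarycache=metadata data4

# stop anything else that might touch the disks, e.g.
service proftpd stop

# run each block size several times and write down the numbers
for bs in 64k 128k 1m; do
    for run in 1 2 3; do
        dd if=/mnt/data4/test.mkv of=/dev/null bs=$bs
    done
done

# put things back the way they were
zfs inherit primarycache data4
service proftpd start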



-- 
Adam

