Uneven load on drives in ZFS RAIDZ1
Stefan Esser
se at freebsd.org
Mon Dec 19 20:42:58 UTC 2011
On 19.12.2011 17:36, Michael Reifenberger wrote:
> Hi,
> a quick test using `dd if=/dev/zero of=/test ...` shows:
>
> dT: 10.004s w: 10.000s filter: ^a?da?.$
> L(q)  ops/s  r/s  kBps  ms/r  w/s  kBps   ms/w  %busy  Name
>    0    378    0     0  12.5  376  36414  11.9   60.6| ada0
>    0    380    0     0  12.2  378  36501  11.8   60.0| ada1
>    0    382    0     0   7.7  380  36847  11.6   59.2| ada2
>    0    375    0     0   7.4  374  36164   9.6   51.3| ada3
>    0    377    0     1  10.2  375  36325  10.1   53.3| ada4
>   10    391    0     0  39.3  389  38064  15.7   80.2| ada5
Thanks! There are surprising differences (ada5 has a queue length of 10
and much higher latency than the other drives).
> Seems to be sufficiently equally distributed for a live system...
Hmmm, 50%-55% busy on ada3 and ada4 contrasts with 80% busy on ada5.
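For a quick sanity check on output like the above, a small awk filter (an illustration, not part of gstat or anything used in this thread) can flag drives whose %busy stands out; the 70% cutoff is an arbitrary threshold chosen for this example, and the sample data is the quoted gstat output:

```shell
# Sample gstat data lines (from the output quoted above, header stripped).
cat > /tmp/gstat.log <<'EOF'
 0 378 0 0 12.5 376 36414 11.9 60.6| ada0
 0 380 0 0 12.2 378 36501 11.8 60.0| ada1
 0 382 0 0 7.7 380 36847 11.6 59.2| ada2
 0 375 0 0 7.4 374 36164 9.6 51.3| ada3
 0 377 0 1 10.2 375 36325 10.1 53.3| ada4
10 391 0 0 39.3 389 38064 15.7 80.2| ada5
EOF

# Flag drives whose %busy exceeds an arbitrary 70% cutoff. The stats
# sit before the "|", the drive name after it; %busy is the last field.
awk -F'|' '{
  n = split($1, f, " ")        # whitespace-separated stats; f[n] is %busy
  name = $2; gsub(/ /, "", name)
  if (f[n] + 0 > 70)
    printf "%s: %.1f%% busy (possible outlier)\n", name, f[n]
}' /tmp/gstat.log
```

Run against the sample above, only ada5 is reported, matching the imbalance discussed here.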
> zpool status shows:
> ...
> NAME STATE READ WRITE CKSUM
> boot ONLINE 0 0 0
> raidz1-0 ONLINE 0 0 0
> ada0p3 ONLINE 0 0 0
> ada1p3 ONLINE 0 0 0
> ada2p3 ONLINE 0 0 0
> ada3p3 ONLINE 0 0 0
> ada4p3 ONLINE 0 0 0
> ada5p3 ONLINE 0 0 0
> ...
>
> The only case where I've seen (and expected to see) an unequal load
> distribution on ZFS was after extending a nearly full four-disk mirror
> pool by two additional disks.
In my case, the pool was created in its current configuration from disk
drives with nearly identical serial numbers. Some of the drives have a
few more power-on hours, since I ran some tests with them before moving
all data from the old pool to the new one, but otherwise everything
should be symmetric.
Best regards, Stefan
More information about the freebsd-current mailing list