Uneven load on drives in ZFS RAIDZ1
Dan Nelson
dnelson at allantgroup.com
Mon Dec 19 16:40:39 UTC 2011
In the last episode (Dec 19), Stefan Esser said:
> for quite some time I have observed an uneven distribution of load between
> drives in a 4 * 2TB RAIDZ1 pool. The following is an excerpt of a longer
> log of 10 second averages logged with gstat:
>
> dT: 10.001s  w: 10.000s  filter: ^a?da?.$
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>     0    130    106   4134    4.5     23   1033    5.2    48.8| ada0
>     0    131    111   3784    4.2     19   1007    4.0    47.6| ada1
>     0     90     66   2219    4.5     24   1031    5.1    31.7| ada2
>     1     81     58   2007    4.6     22   1023    2.3    28.1| ada3
[...]
> zpool status -v
>   pool: raid1
>  state: ONLINE
>   scan: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         raid1       ONLINE       0     0     0
>           raidz1-0  ONLINE       0     0     0
>             ada0p2  ONLINE       0     0     0
>             ada1p2  ONLINE       0     0     0
>             ada2p2  ONLINE       0     0     0
>             ada3p2  ONLINE       0     0     0
Any read from your raidz device will hit three disks (the checksum is
applied across the stripe, not to each disk's block individually, so a full
stripe is always read), so I think your extra I/Os are coming from somewhere
else.
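A back-of-the-envelope sketch of the point above (a simplified model, not
ZFS's actual allocator; the function name and 4 KiB sector size are my
assumptions): on a 4-wide raidz1 vdev, each record is stored as one parity
column plus data spread over the remaining three disks, so a full-record
read touches all three data disks.

```python
import math

def raidz1_columns(record_bytes, ndisks, sector=4096):
    """Simplified raidz1 layout: one parity column per stripe, data
    split across the remaining (ndisks - 1) disks."""
    data_disks = ndisks - 1
    sectors = math.ceil(record_bytes / sector)
    # sectors per data column, rounded up so the whole record fits
    per_col = math.ceil(sectors / data_disks)
    return {"parity_cols": 1,
            "data_cols": data_disks,
            "sectors_per_data_col": per_col}

# A default 128 KiB record on the 4-disk pool from the thread:
layout = raidz1_columns(128 * 1024, ndisks=4)
print(layout)
```

Since every full-record read fans out to all three data columns, read load
generated by the pool itself should come out roughly even across the member
disks, which is why the skew in the gstat excerpt is suspicious.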
What's on p1 on these disks? Could that be the cause of your extra I/Os?
Does "zpool iostat -v 10" give you even numbers across all disks?
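To quantify how uneven the readings actually are, here is a minimal sketch
(the column layout is assumed from the gstat excerpt above; this is not a
gstat API) that pulls the r/s column out of the logged lines and compares
each disk against the mean:

```python
# Parse the r/s column (third field) from the gstat 10-second-average
# lines quoted above and show each disk's deviation from the mean.
sample = """\
0 130 106 4134 4.5 23 1033 5.2 48.8| ada0
0 131 111 3784 4.2 19 1007 4.0 47.6| ada1
0 90 66 2219 4.5 24 1031 5.1 31.7| ada2
1 81 58 2007 4.6 22 1023 2.3 28.1| ada3"""

reads = {}
for line in sample.splitlines():
    fields, name = line.split("|")
    reads[name.strip()] = float(fields.split()[2])

mean = sum(reads.values()) / len(reads)
for disk, rps in sorted(reads.items()):
    print(f"{disk}: {rps:.0f} r/s ({rps / mean - 1:+.0%} vs mean)")
```

Running the same comparison on "zpool iostat -v 10" output would show
whether the imbalance is in the pool's own I/O or in something outside it
(such as the p1 partitions).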
--
Dan Nelson
dnelson at allantgroup.com
More information about the freebsd-current mailing list