ZFS vdev I/O questions
Daniel Kalchev
daniel at digsys.bg
Tue Jul 16 13:16:15 UTC 2013
On 16.07.13 14:53, Mark Felder wrote:
> On Tue, Jul 16, 2013 at 02:41:31PM +0300, Daniel Kalchev wrote:
>> I am observing some "strange" behaviour with I/O spread on ZFS vdevs and
>> thought I might ask if someone has observed it too.
>>
> --SNIP--
>
>> Drives da0-da5 were Hitachi Deskstar 7K3000 (Hitachi HDS723030ALA640,
>> firmware MKAOA3B0) -- these are 512-byte-sector drives, but da0 has been
>> replaced by a Seagate Barracuda 7200.14 (AF) (ST3000DM001-1CH166, firmware
>> CC24) -- this is a 4k-sector drive of a new generation (note the
>> relatively 'old' firmware, which can't be upgraded).
> --SNIP--
>
>> The other observation I have is with the first vdev: the 512b drives do
>> a lot of I/O fast, complete first and then sit idle, while da0 continues
>> to write for many more seconds. They consistently show many more IOPS
>> than the other drives for this type of activity -- on streaming writes
>> all drives behave more or less the same. It is only in this un-dedup
>> scenario that the difference is so pronounced.
>>
>> All the vdevs in the pool are with ashift=12 so the theory that ZFS
>> actually issues 512b writes to these drives can't be true, can it?
>>
>> Another worry is this Seagate Barracuda 7200.14 (AF)
>> (ST3000DM001-1CH166, firmware CC24) drive. It seems to be constantly
>> under-performing. Does anyone know if it is really so different from the
>> ST3000DM001-9YN166 drives? Maybe I should just replace it?
>>
> A lot of information here.
>
> Those Hitachis are great drives. The addition of the Barracuda with
> different performance characteristics could be part of the problem. I'm
> glad you pointed out that the pool has ashift=12, so we can try to rule
> that out. I'd be quite interested to know whether some, or perhaps even
> all, of your issues go away simply by replacing that drive with another
> Hitachi.
> _______________________________________________
> freebsd-fs at freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"
I wanted to comment further on this. The Hitachi drives are only in the
first vdev (da0-da5), together with that new Seagate Barracuda drive.
However, I observe very irregular writing across all the vdevs, not just
within that one vdev.
Here is the output of gstat -d at a 1-second interval:
dT: 1.001s w: 1.000s filter: da[0-9]*$
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d  %busy  Name
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da1
    0     42     42    600    1.1      0      0    0.0      0      0    0.0    2.5  da2
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da3
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da4
    0     33     33    460    4.3      0      0    0.0      0      0    0.0    3.5  da5
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da6
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da7
    0     30     30    656    2.1      0      0    0.0      0      0    0.0    3.0  da8
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da9
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da10
    0     34     34    748    1.5      0      0    0.0      0      0    0.0    2.4  da11
    0     43     43   1299    1.7      0      0    0.0      0      0    0.0    4.2  da12
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da13
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da14
    0     41     41   1395    1.5      0      0    0.0      0      0    0.0    3.3  da15
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da16
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da17
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da18
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da19
    0   1081      0      0    0.0   1081  14551    0.7      0      0    0.0   10.3  da20
    0    124      0      0    0.0     97    286    0.5     25    273    3.7    1.2  ada0
    0    119      0      0    0.0     92    286    0.4     25    273    3.5    1.1  ada1
dT: 1.001s w: 1.000s filter: da[0-9]*$
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d  %busy  Name
   24    501      0      0    0.0    501  18421   46.8      0      0    0.0   98.8  da0
   24    690      0      0    0.0    690  34208   34.6      0      0    0.0   99.8  da1
   24    691      0      0    0.0    691  33317   33.6      0      0    0.0  100.2  da2
   24    750      0      0    0.0    750  37752   30.9      0      0    0.0   99.9  da3
   24    672      0      0    0.0    672  32694   34.9      0      0    0.0  100.1  da4
   24    722      0      0    0.0    722  36178   32.5      0      0    0.0  100.0  da5
   24    633      0      0    0.0    633   9046   37.6      0      0    0.0  100.1  da6
   24    601      0      0    0.0    601   8727   39.2      0      0    0.0  100.0  da7
   24    620      0      0    0.0    620   9198   38.1      0      0    0.0  100.0  da8
   24    619      0      0    0.0    619   8915   38.3      0      0    0.0  100.3  da9
   24    539      0      0    0.0    539   7692   43.3      0      0    0.0  100.0  da10
   24    715      0      0    0.0    715  10221   33.0      0      0    0.0  100.5  da11
   24    584      0      0    0.0    584  44525   39.8      0      0    0.0   99.4  da12
   24    543      0      0    0.0    543  41081   43.2      0      0    0.0  100.6  da13
   24    523      0      0    0.0    523  40641   44.2      0      0    0.0  100.0  da14
   24    521      0      0    0.0    521  40509   44.9      0      0    0.0   99.9  da15
   24    505      0      0    0.0    505  40206   46.1      0      0    0.0   99.8  da16
   24    524      0      0    0.0    524  40677   43.9      0      0    0.0   99.9  da17
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da18
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da19
    0   1082      0      0    0.0   1082   2941    0.2      0      0    0.0    6.5  da20
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada1
dT: 1.000s w: 1.000s filter: da[0-9]*$
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d  %busy  Name
   24    507      0      0    0.0    507  30284   47.7      0      0    0.0   99.9  da0
   24    625      0      0    0.0    625  36929   38.7      0      0    0.0  100.6  da1
   24    724      0      0    0.0    724  44142   33.2      0      0    0.0  100.5  da2
   24    775      0      0    0.0    775  53063   30.6      0      0    0.0   98.2  da3
   24    630      0      0    0.0    630  40891   37.8      0      0    0.0  100.2  da4
   24    698      0      0    0.0    698  38149   35.4      0      0    0.0  102.5  da5
   24    784      0      0    0.0    784  11787   30.7      0      0    0.0   99.9  da6
   24    707      0      0    0.0    707  10840   34.3      0      0    0.0   99.1  da7
   24    689      0      0    0.0    689  10668   34.9      0      0    0.0   99.6  da8
   24    635      0      0    0.0    635   9528   37.8      0      0    0.0  100.1  da9
   24    669      0      0    0.0    669  10268   35.6      0      0    0.0   99.7  da10
   24    675      0      0    0.0    675  10304   35.2      0      0    0.0  100.3  da11
   24    507      0      0    0.0    507  23746   47.4      0      0    0.0  100.0  da12
   24    476      0      0    0.0    476  24454   48.9      0      0    0.0  100.0  da13
   24    495      0      0    0.0    495  31043   48.2      0      0    0.0  100.8  da14
   24    582      0      0    0.0    582  34710   41.3      0      0    0.0  100.1  da15
   24    592      0      0    0.0    592  34022   41.1      0      0    0.0  100.4  da16
   24    559      0      0    0.0    559  34854   42.5      0      0    0.0   99.6  da17
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da18
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da19
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da20
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada1
dT: 1.000s w: 1.000s filter: da[0-9]*$
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d  %busy  Name
   24    719      0      0    0.0    719  23063   33.0      0      0    0.0   99.2  da0
    0     94      0      0    0.0     94   8274   43.6      0      0    0.0   16.0  da1
    0     46      0      0    0.0     46   3839   37.0      0      0    0.0    7.0  da2
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da3
    0    135      0      0    0.0    135   8966   39.3      0      0    0.0   21.9  da4
    0     11      0      0    0.0     11    896   38.8      0      0    0.0    1.4  da5
   24    648      0      0    0.0    648   9070   36.6      0      0    0.0   99.9  da6
   24    679      0      0    0.0    679   9750   35.7      0      0    0.0  100.1  da7
   24    686      0      0    0.0    686   9922   35.0      0      0    0.0   99.9  da8
   24    666      0      0    0.0    666   9654   35.8      0      0    0.0  100.6  da9
   24    682      0      0    0.0    682   9450   35.1      0      0    0.0  100.4  da10
   24    700      0      0    0.0    700   9346   34.1      0      0    0.0  100.0  da11
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da12
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da13
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da14
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da15
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da16
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da17
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da18
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da19
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da20
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada1
dT: 1.000s w: 1.000s filter: da[0-9]*$
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d  %busy  Name
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da1
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da2
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da3
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da4
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da5
   24    428      0      0    0.0    428   4207   55.6      0      0    0.0  100.6  da6
   24    447      0      0    0.0    447   4279   52.9      0      0    0.0  100.4  da7
   24    432      0      0    0.0    432   4087   55.6      0      0    0.0  100.5  da8
   24    524      0      0    0.0    524   6243   45.3      0      0    0.0   99.6  da9
   24    554      0      0    0.0    554   6379   43.2      0      0    0.0  100.0  da10
   24    439      0      0    0.0    439   4611   54.0      0      0    0.0   97.9  da11
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da12
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da13
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da14
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da15
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da16
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da17
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da18
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da19
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da20
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada1
dT: 1.000s w: 1.000s filter: da[0-9]*$
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d  %busy  Name
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da1
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da2
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da3
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da4
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da5
   24    350      0      0    0.0    350   3263   69.9      0      0    0.0  102.4  da6
   24    326      0      0    0.0    326   3611   74.2      0      0    0.0  100.1  da7
   24    335      0      0    0.0    335   3367   72.4      0      0    0.0  100.0  da8
   24    329      0      0    0.0    329   2943   73.5      0      0    0.0  100.3  da9
   24    326      0      0    0.0    326   2883   75.1      0      0    0.0   99.8  da10
   24    369      0      0    0.0    369   2995   65.2      0      0    0.0  100.3  da11
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da12
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da13
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da14
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da15
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da16
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da17
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da18
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da19
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da20
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada1
dT: 1.001s w: 1.000s filter: da[0-9]*$
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d  %busy  Name
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da1
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da2
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da3
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da4
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da5
   24    525      0      0    0.0    525   8507   45.3      0      0    0.0  100.1  da6
   24    430      0      0    0.0    430   6761   55.6      0      0    0.0  101.7  da7
   24    479      0      0    0.0    479   7548   50.1      0      0    0.0  100.6  da8
   24    542      0      0    0.0    542   9463   44.3      0      0    0.0  100.2  da9
   23    593      0      0    0.0    593  10386   40.6      0      0    0.0  100.1  da10
   24    555      0      0    0.0    555   9678   42.9      0      0    0.0   98.0  da11
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da12
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da13
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da14
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da15
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da16
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da17
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da18
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da19
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da20
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada1
dT: 1.001s w: 1.000s filter: da[0-9]*$
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d  %busy  Name
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da1
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da2
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da3
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da4
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da5
   24    566      0      0    0.0    566   9800   41.3      0      0    0.0   98.4  da6
   24    526      0      0    0.0    526  12370   46.5      0      0    0.0   99.9  da7
   24    577      0      0    0.0    577  13166   41.5      0      0    0.0   99.9  da8
   24    538      0      0    0.0    538  11990   44.7      0      0    0.0   99.9  da9
   24    631      0      0    0.0    631  12666   37.8      0      0    0.0   99.6  da10
   24    650      0      0    0.0    650  12894   36.4      0      0    0.0  101.2  da11
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da12
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da13
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da14
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da15
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da16
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da17
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da18
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da19
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da20
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada1
dT: 1.001s w: 1.000s filter: da[0-9]*$
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d  %busy  Name
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da1
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da2
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da3
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da4
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da5
   24    365      0      0    0.0    365   3604   66.3      0      0    0.0  100.0  da6
   24    361      0      0    0.0    361   3724   65.8      0      0    0.0  100.2  da7
   24    363      0      0    0.0    363   3680   65.4      0      0    0.0   99.5  da8
   24    342      0      0    0.0    342   3500   69.4      0      0    0.0  100.8  da9
   24    355      0      0    0.0    355   3460   70.1      0      0    0.0  101.1  da10
   24    373      0      0    0.0    373   3616   65.0      0      0    0.0   99.5  da11
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da12
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da13
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da14
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da15
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da16
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da17
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da18
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da19
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da20
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada1
dT: 1.000s w: 1.000s filter: da[0-9]*$
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d  %busy  Name
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da1
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da2
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da3
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da4
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da5
   24    539      0      0    0.0    539   4947   44.2      0      0    0.0   99.8  da6
   24    468      0      0    0.0    468  12565   52.0      0      0    0.0  100.2  da7
   24    493      0      0    0.0    493  10950   49.5      0      0    0.0  100.0  da8
   24    450      0      0    0.0    450  12665   52.8      0      0    0.0  100.2  da9
   24    528      0      0    0.0    528  11070   45.7      0      0    0.0  100.0  da10
   24    542      0      0    0.0    542  10750   43.9      0      0    0.0   98.0  da11
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da12
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da13
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da14
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da15
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da16
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da17
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da18
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da19
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da20
    0     39      0      0    0.0     38   2583   14.6      0      0    0.0    4.9  ada0
    0     39      0      0    0.0     38   2583   14.5      0      0    0.0    4.9  ada1
dT: 1.001s w: 1.000s filter: da[0-9]*$
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d  %busy  Name
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da1
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da2
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da3
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da4
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da5
   24    367      0      0    0.0    367   7972   65.1      0      0    0.0  100.0  da6
   24    360      0      0    0.0    360   5055   67.4      0      0    0.0  100.1  da7
   24    345      0      0    0.0    345   6233   69.4      0      0    0.0  100.2  da8
   24    359      0      0    0.0    359   5191   65.9      0      0    0.0  101.3  da9
   24    383      0      0    0.0    383   7452   62.2      0      0    0.0  100.2  da10
   24    368      0      0    0.0    368   7528   64.7      0      0    0.0  100.9  da11
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da12
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da13
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da14
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da15
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da16
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da17
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da18
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da19
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da20
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada1
dT: 1.001s w: 1.000s filter: da[0-9]*$
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d  %busy  Name
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da1
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da2
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da3
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da4
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da5
    0    181      0      0    0.0    181   6534   68.1      0      0    0.0   50.3  da6
    0    379      0      0    0.0    379  14071   57.0      0      0    0.0   90.5  da7
    0    229      0      0    0.0    229   8680   65.1      0      0    0.0   64.1  da8
   24    400      0      0    0.0    400  13871   60.5      0      0    0.0   99.8  da9
    0    193      0      0    0.0    193   6874   57.7      0      0    0.0   45.4  da10
    0    222      0      0    0.0    222   7753   68.2      0      0    0.0   65.0  da11
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da12
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da13
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da14
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da15
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da16
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da17
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da18
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da19
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da20
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada1
dT: 1.001s w: 1.000s filter: da[0-9]*$
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d  %busy  Name
    0    332    202  10093    8.8    130  11620   13.3      0      0    0.0   51.2  da0
    0    253    121   4381    2.1    132  11620   10.9      0      0    0.0   20.0  da1
    0    280    152   4976    4.6    128  11612   11.2      0      0    0.0   28.9  da2
    0    376    247   9705    3.9    129  11612   11.3      0      0    0.0   34.6  da3
    0    244    115   5540    5.4    129  11628   11.0      0      0    0.0   27.7  da4
    0    268    138   5896    7.6    130  11620   10.9      0      0    0.0   38.7  da5
    1    467    273  10988    7.9    194  12627   12.2      0      0    0.0   60.4  da6
    0    349    147   6583    8.7    202  12659    9.4      0      0    0.0   49.6  da7
    5    368    169   6803    7.0    199  12647   10.7      0      0    0.0   48.7  da8
    0    451    253  11020    8.4    198  12651    9.7      0      0    0.0   61.1  da9
    0    306    104   4493    6.2    202  12647   10.4      0      0    0.0   31.7  da10
    0    350    151   4765    5.6    199  12627   10.4      0      0    0.0   40.3  da11
    9    366    258  11652    6.9    108  12455   10.3      0      0    0.0   47.1  da12
    0    302    194   8126    5.2    108  12455   13.2      0      0    0.0   36.8  da13
    0    292    186   8162    3.1    106  12447   12.9      0      0    0.0   30.0  da14
    0    370    264  12627    9.7    106  12447   13.3      0      0    0.0   54.1  da15
    0    182     72   3110   10.2    110  12459    9.8      0      0    0.0   28.1  da16
    0    171     62   3206    8.5    109  12455    9.8      0      0    0.0   25.5  da17
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da18
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  da19
    4    744    216    394    0.4    529   1540    0.1      0      0    0.0    4.0  da20
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada0
    0      0      0      0    0.0      0      0    0.0      0      0    0.0    0.0  ada1
As you can see, the initial burst goes to all vdevs, saturating the
drives at 100%. Then vdev 3 completes, then the Hitachi drives of vdev 1
complete (with the Seagate drive writing a bit longer), and then, for a
few more seconds, only the vdev 2 drives are writing. It seems the amount
of data is the same; vdev 2 just writes it more slowly. However, the
drives in vdev 2 and vdev 3 are identical, so they should have the same
performance characteristics (and as long as the drives are not 100%
saturated, all vdevs complete at more or less the same time). At other
times, some other vdev completes last -- it is never the same vdev that
is 'slow'.
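One way to test the "same amount of data" hypothesis is to sum the per-vdev
write throughput over the gstat samples. A minimal sketch, assuming the vdev
membership implied by the drive numbering above (da0-da5, da6-da11,
da12-da17) and a small hand-copied subset of the write-kBps figures; the
selection of samples is illustrative only:

```python
from collections import defaultdict

# (drive, write kBps) pairs, one tuple per drive per 1-second sample.
# Figures hand-copied from three of the gstat snapshots above, one
# representative drive per vdev; a real run would parse `gstat -bd` output.
samples = [
    ("da0", 18421), ("da6", 9046),  ("da12", 44525),  # sample 1
    ("da0", 30284), ("da6", 11787), ("da12", 23746),  # sample 2
    ("da0", 0),     ("da6", 9070),  ("da12", 0),      # sample 3
]

# Assumed vdev layout (da0-da5 / da6-da11 / da12-da17).
VDEVS = {
    "vdev1": {f"da{i}" for i in range(0, 6)},
    "vdev2": {f"da{i}" for i in range(6, 12)},
    "vdev3": {f"da{i}" for i in range(12, 18)},
}

def per_vdev_totals(samples):
    """Total kB written per vdev (kBps summed over 1-second samples)."""
    totals = defaultdict(int)
    for drive, wkbps in samples:
        for vdev, members in VDEVS.items():
            if drive in members:
                totals[vdev] += wkbps
    return dict(totals)

print(per_vdev_totals(samples))
```

Summing kBps over 1-second intervals approximates total kB written, so a
vdev that receives the same amount of data but keeps writing after the
others finish would show a comparable total spread over more samples.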
Could this be a DDT/metadata-specific issue? Is the DDT/metadata
vdev-specific? The pool initially had only two vdevs, and after vdev 3
was added, most of the data was written with dedup disabled. Also, the
ZIL was added later, so the initial metadata could be fragmented. But why
should this affect writing? The zpool is indeed pretty full, but then
performance should degrade for all vdevs (which are all more or less
equally full).
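For the DDT question, the dedup table and per-vdev space usage can be
inspected directly. A sketch of the usual commands ("tank" is a placeholder
pool name, and zdb on a busy pool may take a while and report transient
inconsistencies):

```shell
# Dedup table (DDT) summary and histograms for the pool.
zdb -DD tank

# Per-vdev capacity; uneven free space can bias the write
# allocator toward emptier vdevs.
zpool list -v tank
```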
Daniel