ZFS vdev I/O questions

Ivailo Tanusheff Ivailo.Tanusheff at skrill.com
Tue Jul 16 14:09:42 UTC 2013


Hi danbo :)

Couldn't this be some kind of pool fragmentation? That is usually the cause of such slow spots in a disk system. I think your pool is getting full and heavily fragmented, which is why each request ends up touching data on a different vdev.
But this has nothing to do with the single slow device :(
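
If you want to check, something like the following should show how full the pool is and how fragmented the metaslabs are (I'm assuming the pool is called 'tank' here, and the zdb output is only a rough indicator):

  zpool list tank      # CAP column: allocation above ~80% tends to hurt write performance
  zdb -mm tank         # per-metaslab space maps; many small free segments suggest fragmentation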

Best regards,
Ivailo Tanusheff

-----Original Message-----
From: owner-freebsd-fs at freebsd.org [mailto:owner-freebsd-fs at freebsd.org] On Behalf Of Daniel Kalchev
Sent: Tuesday, July 16, 2013 4:16 PM
To: freebsd-fs at freebsd.org
Subject: Re: ZFS vdev I/O questions


On 16.07.13 14:53, Mark Felder wrote:
> On Tue, Jul 16, 2013 at 02:41:31PM +0300, Daniel Kalchev wrote:
>> I am observing some "strange" behaviour with I/O spread on ZFS vdevs 
>> and thought I might ask if someone has observed it too.
>>
> --SNIP--
>
>> Drives da0-da5 were Hitachi Deskstar 7K3000 (Hitachi HDS723030ALA640,
>> firmware MKAOA3B0) -- these are 512-byte sector drives, but da0 has
>> been replaced by a Seagate Barracuda 7200.14 (AF) (ST3000DM001-1CH166,
>> firmware CC24) -- this is a 4K-sector drive of a newer generation
>> (note the relatively 'old' firmware, which cannot be upgraded).
> --SNIP--
>

As you can see, the initial burst goes to all vdevs, saturating the drives at 100%. Then vdev 3 completes, then the Hitachi drives in vdev 1 complete while the Seagate drive writes a bit more, and then, for a few more seconds, only the vdev 2 drives are still writing. The amount of data seems to be the same; vdev 2 simply writes it more slowly. However, the drives in vdev 2 and vdev 3 are identical, so they should have the same performance characteristics (and as long as the drives are not 100% saturated, all vdevs do complete more or less at the same time). At other times some other vdev completes last -- it is never the same vdev that is 'slow'.
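
For reference, the per-vdev spread is easy to watch with something along these lines while a transaction group is being flushed ('tank' stands in for the actual pool name):

  zpool iostat -v tank 1        # per-vdev and per-disk throughput at 1-second intervals
  gstat -I 1s -f '^da[0-9]+$'   # per-drive %busy for the da(4) disks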

Could this be a DDT/metadata-specific issue? Is the DDT/metadata vdev-specific? The pool initially had only two vdevs, and after vdev 3 was added most of the data was written with dedup disabled. Also, the ZIL was added later, and the initial metadata could be fragmented. But why should any of this affect writing? The zpool is indeed pretty full, but then performance should degrade on all vdevs (which are more or less equally full).
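
If it is DDT-related, something like the following should at least show how large the dedup table is and how evenly the vdevs are filled (again assuming the pool is named 'tank'; zdb can take a long time with a large DDT):

  zdb -DD tank         # DDT histogram: number of entries and their on-disk/in-core size
  zpool list -v tank   # per-vdev SIZE/ALLOC/FREE, to compare how full each vdev is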

Daniel
_______________________________________________
freebsd-fs at freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-fs
To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"



