effect of differing spindle speeds on prospective zfs vdevs
Paul Mather
paul at gromit.dlib.vt.edu
Sat Dec 5 13:51:16 UTC 2020
On Fri, 4 Dec 2020 23:43:15 +0000, tech-lists <tech-lists at zyxst.net> wrote:
> Normally when making an array, I'd like to use all disks all same speed,
> interface, make and model but from different batches. In this case, I've no
> choice, so we have multiple 1TB disks some 7.2k some 5.4k. I've not mixed
> them like this before.
>
> What effect would this have on the final array? Slower than if all one or the other?
> No effect? I'm expecting the fastest access will be that of the slowest vdev.
I believe you are correct in intuiting that the performance of the pool will be influenced by the slowest devices.
ZFS supports a variety of pool organisations, each with differing I/O characteristics, so "making an array" could cover a multiplicity of possibilities. E.g., a striped ("JBOD") pool would have different I/O characteristics than a RAIDZ pool. Read access would also differ from write access, so the use case of the pool (read-intensive or write-intensive) would affect I/O speeds. (And, furthermore, small random vs. large sequential I/O will have an impact.)
IIRC, write IOPS of RAIDZ pools are limited to the IOPS of the slowest device.
> Similarly some disks block size is 512b logical/512b physical, others are
> 512b logical/4096 physical, still others are 4096/4096. Any effect of
> mixing hardware? I understand sfs sets its own blocksize.
IIRC, ZFS pools have a single ashift for the entire pool, so you should set it to accommodate the 4096/4096 devices to avoid performance degradation. I believe it defaults to that now, and should auto-detect anyway. But, in a mixed setup of vdevs like you have, you should be using ashift=12.
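A rough sketch of how you might check the sector sizes and force ashift=12 at pool creation time (the pool name "tank" and the ada device names are just placeholders; adjust for your hardware). On FreeBSD, the sysctl sets the floor for auto-detected ashift, and newer OpenZFS also accepts ashift as a pool property:

```shell
# Check logical/physical sector sizes of each candidate disk
# (look at the "sectorsize" and "stripesize" lines).
diskinfo -v /dev/ada0

# Ensure newly created vdevs use at least 4096-byte alignment.
sysctl vfs.zfs.min_auto_ashift=12

# On OpenZFS you can also request it explicitly at creation time.
zpool create -o ashift=12 tank raidz ada0 ada1 ada2 ada3
```

Note that ashift is fixed per vdev at creation time, so it is worth getting right before putting data on the pool.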
I believe having ashift=9 on your mixed-drive setup is what would hurt performance the most: writes to the 4096-byte-sector drives would be misaligned, forcing read-modify-write cycles on those disks.
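If you want to verify what an existing pool is actually using, zdb will report the ashift per vdev (again, "tank" is a placeholder pool name):

```shell
# Print the cached pool configuration and pick out the ashift values;
# 9 means 512-byte alignment, 12 means 4096-byte alignment.
zdb -C tank | grep ashift
```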
Cheers,
Paul.