vdevs with different spindle speeds

Charles Sprickman spork at bway.net
Sat Dec 5 17:22:22 UTC 2020

> On Dec 5, 2020, at 8:24 AM, Mel Pilgrim <list_freebsd at bluerosetech.com> wrote:
> On 2020-12-05 4:38, tech-lists wrote:
>> Normally when making an array, I'd like to use all disks all same speed,
>> interface, make and model but from different batches. In this case, I've no
>> choice, so we have multiple 1TB disks some 7.2k some 5.4k. I've not mixed
>> them like this before.
>> What effect would this have on the final array? Slower than if all one or the other?
>> No effect? I'm expecting the fastest access will be that of the slowest vdev.
>> Similarly, some disks' block size is 512b logical/512b physical, others are 512b
>> logical/4096 physical, still others are 4096/4096. Any effect of
>> mixing hardware? I understand zfs sets its own blocksize.
> Make sure you have ashift=12 for everything and you'll be fine.  The marginal increase in latency with the 5400 rpm drives will disappear behind ZFS' heavily-cached, asynchronous operation unless you're hammering the pool with calls for cold data.
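To make the ashift=12 advice concrete, here is a rough sketch of checking sector sizes and forcing 4K alignment on FreeBSD. Device names (ada0, ada1) and the pool name "tank" are placeholders, not from the thread:

```shell
# Check what each drive reports for logical/physical sector size
# (512/512, 512e, or 4Kn) before building the pool:
diskinfo -v /dev/ada0 | grep -E 'sectorsize|stripesize'

# FreeBSD: floor the auto-detected ashift so 512e drives are still
# written with 4K-aligned records:
sysctl vfs.zfs.min_auto_ashift=12

# OpenZFS also accepts ashift as a property at pool creation time:
zpool create -o ashift=12 tank mirror ada0 ada1

# Verify the ashift actually used by the vdevs:
zdb -C tank | grep ashift
```

With ashift=12 set pool-wide, the mix of 512/512, 512e, and 4Kn drives all see 4K-aligned writes, so none of them pays a read-modify-write penalty.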

This is interesting; I always considered mixing a “no-no”, probably because I was told as much in the old days of hardware RAID with minimal/dumb caching.

I was considering something similar for some cheap, not terribly critical servers, but in my case mixing SSDs: for example, one standard Samsung EVO paired in a mirror with one of the cheaper Intel datacenter-grade drives (generally about twice the cost of a standard SSD). I figured that even if I put them in service at the same time, the first failure should be staggered. But I was not really clear on what effect this would have, or whether I’d be confusing ZFS with this mix of drives…

Any thoughts on this?
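For what it's worth, a mixed mirror like that can be sanity-checked after the fact. A minimal sketch, assuming placeholder device names ada0/ada1 and a pool called "fast": ZFS spreads reads across both sides of a mirror, while a write only completes once the slower device has it, so per-device stats reveal whether one drive is dragging the pair down.

```shell
# Hypothetical mirror of two dissimilar SSDs:
zpool create -o ashift=12 fast mirror ada0 ada1

# Per-device bandwidth/ops every 5 seconds; watch for one side
# consistently lagging the other:
zpool iostat -v fast 5

# On OpenZFS 2.x, -w prints per-vdev latency histograms, which make
# a slow mirror member stand out more clearly:
zpool iostat -w fast
```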


> If you're really worried about it, get a cheap SSD and use it as a cache device.
> _______________________________________________
> freebsd-fs at freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"
