OpenZFS dRAID questions
Freddie Cash
fjwcash at gmail.com
Thu Jan 28 23:53:51 UTC 2021
[Not sure which list is more applicable for this question, so sending to
-fs and -stable. If it should be only one or the other, let me know.]
I'm trying to get an understanding of how the dRAID vdev support works in
ZFS, and what a good setup would be for a storage server using multiple
JBODs full of SATA drives.
Right now, my storage pools are made up of 6-disk raidz2 vdevs. So my
24-bay systems have 4x raidz2 vdevs, my 45-bay systems have 7x raidz2 vdevs
with 3 (cold) spares, and my 90-bay systems have 15x raidz2 vdevs (1 vdev
uses the 3 extra drives from each 45-drive JBOD).
If I'm reading the dRAID docs correctly, instead of having multiple raidz2
vdevs in the pool, I'd have a single draid vdev whose redundancy groups
are configured with similar data/parity counts to "mimic" raidz2?
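From what I can tell from the zpoolconcepts(7) man page (so take this with
a grain of salt), the whole layout is expressed as a single vdev spec of
the form draid[<parity>][:<data>d][:<children>c][:<spares>s]. So
raidz2-style 4+2 groups across 12 disks would look something like this
(pool and device names made up):

    # two 4+2 redundancy groups distributed across 12 drives, no spares
    zpool create tank draid2:4d:12c:0s da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11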
If that's correct, would it make sense to have a single draid vdev per pool
(splitting the draid vdev across JBODs)? Or a single draid vdev per JBOD
chassis (so 2x draid vdevs)?
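To make that concrete, for a 90-drive system I think the two options would
look something like this (device names made up, using jot(1) to generate
them):

    # option 1: one draid vdev spanning both JBODs
    # 90 children = 14 x (4+2) groups + 6 distributed spares
    zpool create tank draid2:4d:90c:6s $(jot -w da%d 90 0)

    # option 2: one draid vdev per 45-drive JBOD
    # each vdev: 45 children = 7 x (4+2) groups + 3 distributed spares
    zpool create tank draid2:4d:45c:3s $(jot -w da%d 45 0) \
                      draid2:4d:45c:3s $(jot -w da%d 45 45)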
What's the practical limit for the number of drives in a single draid vdev?
I have a brand-new storage server sitting in a box with a 44-bay JBOD that
will be going into the server room next week, and I'm tempted to try draid
on it instead of the multiple-raidz2 setup. This would use 4+2
(data+parity) redundancy groups with 2 distributed spares, I believe.
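If I have the syntax and math right, that would be:

    # 44 children = 7 x (4 data + 2 parity) groups + 2 distributed spares
    zpool create tank draid2:4d:44c:2s $(jot -w da%d 44 0)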
This server will be replacing a 90-bay system (2x JBODs), which will then
be used as another storage server once all the dying drives are replaced.
It will be interesting to see how draid works on that one as well, but I'm
not sure how to configure it (perhaps one draid vdev per JBOD, as sketched
above?).
--
Freddie Cash
fjwcash at gmail.com