Drive labelling with ZFS

David Christensen dpchrist at holgerdanske.com
Sat Jul 8 02:21:58 UTC 2017


On 07/07/17 03:47, Frank Leonhardt wrote:
> I'm afraid the Lucas book has a lot of stuff in it that may have been true
> once. I've had a fun time with the chance to experiment with "big
> hardware" full time for a few weeks, and have some differing views on
> some of it.
>
> With big hardware you can flash the light on any drive you like (using
> FreeBSD sesutil) so the label problem goes away anyhow. With a small
> SATA array I really don't think there's a solution. Basically ZFS will
> cope with having its drives installed anywhere and stitch them together
> where it finds them. If you accidentally swap a disk around its internal
> label will be wrong. More to the point, if you have to migrate drives to
> another machine, ZFS will be cool but your labels won't be.
>
> The most useful thing I can think of is to label the caddies with the
> GUID (first or last 2-3 digits). If you have only one shelf you should
> be able to find the one you want quick enough.

As I understand it, ZFS goes by the UUID/GUID.  So, using UUIDs for 
software and applying matching physical labels to each drive/caddy makes 
sense.
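
The caddy-labelling idea can be scripted.  A minimal sh sketch; on a 
live system you would read the GUID from `zpool status -g` against your 
pool, so the value below is a hard-coded example, not real output:

```shell
# Example GUID of the kind `zpool status -g` reports for each vdev
# (hard-coded here; substitute one from your own pool):
guid=10576316624387183567

# The last three digits are usually enough to tell drives apart on
# a single shelf:
label=$(printf '%s\n' "$guid" | sed 's/.*\(...\)$/\1/')
printf 'caddy label: ...%s\n' "$label"   # prints "caddy label: ...567"
```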


> Incidentally, the Lucas book says you should configure your raidz arrays
> with 2, 4, 8, 16... data drives plus extras depending on the level of
> redundancy. I couldn't see why, so did some digging. The only reason I
> found relates to the "parity" data fitting exactly in to a block,
> assuming specific (small) block sizes to start with. Even if you hit
> this magic combo, using compression is A Good Thing with ZFS so your
> logical:physical mapping is never going to work. So do what you like
> with raidz. With four drives I'd go for raidz2, because I like to have
> more than one spare drive. With 2x2 mirrors you run the risk of killing
> the remaining drive on a pair when the first one dies. It happens more
> often than you think, because resilvering stresses the remaining drive
> and if it's gonna go, that's when (a scientific explanation for sod's
> law). That said, mirrors are useful if the drives are separated on
> different shelves. It depends on your level of paranoia, but in a SOHO
> environment there's a tendency to use an array as its own backup.
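
The block-size argument above can be seen with a little arithmetic. 
Assuming the default 128 KiB recordsize and 4 KiB sectors (ashift=12), 
each data drive's share of a record must round up to a whole sector, so 
non-power-of-two drive counts waste some space as padding:

```shell
# Split a 128 KiB record across N data drives; each drive's share is
# rounded up to a whole 4 KiB sector, and anything beyond the record
# itself is padding:
recordsize=131072   # 128 KiB, the ZFS default
sector=4096         # 4 KiB, i.e. ashift=12
for ndata in 2 3 4 5 8; do
  share=$(( (recordsize / ndata + sector - 1) / sector * sector ))
  pad=$(( share * ndata - recordsize ))
  echo "$ndata data drives: $share bytes/drive, $pad bytes padding"
done
```

With 2, 4, or 8 data drives the padding is zero; with 3 or 5 it is not, 
which is all the "magic combo" amounts to (and, as noted above, 
compression changes the physical sizes anyway).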
>
> If you could get a fifth drive, raidz2 would be even better. raidz1
> with four drives is statistically safer than two mirrors as long as you
> swap the failed drive fast. And on that subject, it's good to have a
> spare slot in the array for the replacement drive. Unless the failed
> drive has completely failed, this is much kinder to the remaining drives
> during the resilver.
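
The replace-in-place sequence can be sketched as follows.  The pool and 
device names (tank, da1, da4) are assumptions, and the `run` wrapper 
echoes each command instead of executing it, so nothing here touches a 
real pool:

```shell
# Dry-run wrapper: print each command rather than executing it.
run() { printf '+ %s\n' "$*"; }

# With the failing drive (da1) still attached, replace it with the new
# drive (da4) in the spare slot.  ZFS can then read from da1 as well
# during the resilver, easing the load on the healthy drives:
run zpool replace tank da1 da4

# The old drive is detached automatically once the resilver completes;
# watch progress with:
run zpool status tank
```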

Thanks for the information.  :-)


David



More information about the freebsd-questions mailing list