Drive labelling with ZFS

Frank Leonhardt frank2 at fjl.co.uk
Fri Jul 7 10:47:27 UTC 2017


On 06/14/2017 07:22 AM, Frank Leonhardt wrote:
>> Hi David,
>>
>> It turns out that these options were set anyway. The problem turned
>> out to be that I was assuming that geom label played nice with GPT.
>> It doesn't! Well, it does display labels set on GPT partitions, but
>> it doesn't change them. I took a look at the GPT blocks to confirm
>> this. It does, however, sometimes mask the GPT label with its own,
>> leading to much monkeyhouse.
>>
>> So ignore glabel completely and set the labels using gpart instead.
>>
>> Having got this sorted out, it turns out that it's really not as
>> useful as it sounds. On a new array you can find a broken drive this
>> way, but when it comes to moving a drive around (e.g. from the spare
>> slot to its correct location) life isn't so simple. First off, ZFS
>> does a good job of locating pool components wherever in the array you
>> move them using the GUID. However, if you change the GPT label and
>> move it, ZFS will refer to it by the device name instead. Nothing I
>> have tried will persuade it otherwise. If you leave the label intact
>> it's now pointing to the wrong slot, which ZFS really doesn't mind
>> about but this could really ruin your day if you don't know.
>>
>> Now that FreeBSD 11.0 can flash the ident light on any drive you
>> choose, by device name (as used by ZFS), I'm seriously wondering if
>> labels are worth the bother if they can't be relied on. Consider what
>> happens if a tech pulls two drives and puts them back in the wrong
>> order. ZFS will carry on regardless, but the label will now identify
>> the wrong slot. Dangerous!
>>

> I'm glad I was able to provide you with one useful clue.
>
>
> The Lucas books assume a fair amount of reader knowledge and 
> follow-up, but they gave me a nice boost up the learning curve and 
> were worth every penny.  I probably would not have understood glabel 
> vs. gpart without them.
>
>
> The /boot/loader.conf settings are also present on my FreeBSD 11.0 
> system.  The installer must have set them for me.
>
>
> I agree with the idea of having some kind of identifier other than the
> automatically generated, interface-based device node (e.g. /dev/ada0s1) 
> for devices/virtual devices.  It sounds like FreeBSD provides 
> multiple choices and the various subsystems are not well coordinated 
> on their usage (?).
>
>
> I am a SOHO user who has only built a few JBOD and RAID0 arrays. But 
> now I have four 1.5 TB drives and would like to put them to use with 
> FreeBSD ZFS RAIDZ1 or striped mirrors.  If you figure out a "one label 
> to rule them all" solution, please post it.  (My preference at this 
> point would be whitespace-free strings set by the administrator based 
> on drive function -- e.g. "zraid1a", "zraid1b", "zraid1c", and 
> "zraid1d", or "zmirror0a", "zmirror0b", "zmirror1a", and "zmirror1b" 
> in my case; I plan to attach matching physical labels on the drives 
> themselves. Failing free-form strings, I prefer make/model/serial 
> number.)
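
For what it's worth, setting those labels with gpart rather than glabel 
looks something like this -- ada0 and the single-partition layout below 
are only placeholders, adjust to suit:

  # gpart create -s gpt ada0
  # gpart add -t freebsd-zfs -a 1m -l zraid1a ada0
  # gpart show -l ada0

gpart show -l reads the label straight back out of the GPT, and the 
partition then turns up as /dev/gpt/zraid1a for zpool to use.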

I'm afraid the Lucas book has a lot of stuff in it that may have been 
true once. I've had a fun time with the chance to experiment with "big 
hardware" full time for a few weeks, and have some differing views on 
some of it.

With big hardware you can flash the light on any drive you like (using 
FreeBSD's sesutil), so the label problem goes away anyhow. With a small 
SATA array I really don't think there's a solution. Basically ZFS will 
cope with having its drives installed anywhere and will stitch them 
together wherever it finds them. If you accidentally swap a disk around, 
its internal label will be wrong. More to the point, if you have to 
migrate drives to another machine, ZFS will be cool but your labels 
won't be.
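
To be clear on the light-flashing bit, that's just sesutil from the base 
system (11.0 onwards) talking to a SES-capable enclosure -- da5 below is 
only an example device name:

  # sesutil locate da5 on
  # sesutil locate da5 off
  # sesutil map

sesutil map lists the enclosure slots and which device sits in each, 
which is worth checking before you pull anything.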

The most useful thing I can think of is to label the caddies with the 
GUID (first or last 2-3 digits). If you have only one shelf you should 
be able to find the one you want quick enough.
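
If you go that route, the GUIDs are easy enough to read off the disks 
themselves with zdb -- the partition name here is only an example:

  # zdb -l /dev/da5p1 | grep -w guid

That prints the guid stored in the vdev label on that partition, which 
should be the same number ZFS uses to track the drive.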

Incidentally, the Lucas book says you should configure your raidz arrays 
with 2, 4, 8, 16... data drives plus extras depending on the level of 
redundancy. I couldn't see why, so did some digging. The only reason I 
found is that each record then splits across the data drives into a 
whole number of sectors, so nothing is lost to padding, and that only 
works if you assume specific (small) block sizes to start with. Even if 
you hit this magic combo, using compression is A Good Thing with ZFS, so 
your logical:physical mapping is never going to hold anyway. So do what 
you like with raidz. With four drives I'd go for raidz2, because I like 
to have more than one drive's worth of redundancy. With 2x2 mirrors you 
run the risk of killing the remaining drive of a pair when the first one 
dies. It happens more often than you'd think, because resilvering 
stresses the remaining drive, and if it's gonna go, that's when (a 
scientific explanation for sod's law). That said, mirrors are useful if 
the drives are separated on different shelves. It depends on your level 
of paranoia, but in a SOHO environment there's a tendency to use an 
array as its own backup.
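
To put rough numbers on the block-size argument (assuming 4K sectors and 
the default 128K records, which is where the rule seems to come from):

  128K record / 4 data drives = 32K each = eight whole 4K sectors, no padding
  128K record / 5 data drives = 25.6K each, not a whole number of sectors,
                                so the odd bits get rounded up and padded

And the raidz2 itself, using the label names you suggested (the pool 
name "tank" is only a placeholder):

  # zpool create tank raidz2 gpt/zraid1a gpt/zraid1b gpt/zraid1c gpt/zraid1d
  # zfs set compression=lz4 tank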

If you could get a fifth drive, raidz2 would be even better. raidz1 
with four drives is statistically safer than two mirrors as long as you 
swap the failed drive fast. And on that subject, it's good to have a 
spare slot in the array for the replacement drive. Unless the failing 
drive has died completely, this is much kinder to the remaining drives 
during the resilver.
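
In other words, put the replacement in the spare slot and replace while 
the old drive is still attached -- device names here are only examples:

  # zpool replace tank da3 da7

With the old drive still present, the resilver can copy most of the data 
straight off it rather than reconstructing everything from the other 
members.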

Regards, Frank.


