ZFS and glabel

Daniel Kalchev daniel at digsys.bg
Thu Nov 20 10:46:23 UTC 2014

Hi Ivailo,

The FreeBSD glabel is in a bit of a mess, indeed. It is a mess not because
the tech is bad or buggy (although there are caveats), but because the
glabel tool has made it all too confusing by displaying all the label
types together. Or perhaps our assumptions about what it shows are simply wrong.

We have several kinds of labels, and each of them lives under its own
namespace (a subdirectory of /dev). There is the glabel type, which you
manipulate with glabel; this lives under /dev/label. There are the GPT
labels that you manipulate with gpart, which live under /dev/gpt (and, as
raw UUIDs, under /dev/gptid). There are the gmirror labels that you
manipulate with gmirror and that live under /dev/mirror. There are the
disk ID labels that live under /dev/diskid. There are the UFS labels that
you manipulate with newfs/tunefs and that live under /dev/ufs. Perhaps
there are others I have missed.
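To make the namespaces concrete, here is a small Python sketch (mine, not from the original post) that reports which of these /dev subdirectories exist on a given machine; the namespace list is taken from the paragraph above, and on a non-FreeBSD box most or all of them will simply be absent.

```python
import os

# Label namespaces named above, one per labeling mechanism.
LABEL_NAMESPACES = ["label", "gpt", "gptid", "mirror", "diskid", "ufs"]

def present_namespaces(dev="/dev"):
    """Return which label namespaces actually exist under `dev`."""
    return [ns for ns in LABEL_NAMESPACES
            if os.path.isdir(os.path.join(dev, ns))]
```

The function takes the base directory as a parameter so it can be pointed at a test tree as easily as at the real /dev.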

Then comes ZFS. For its own sanity, ZFS labels the devices it is
given with its own labels -- so that when you reboot or move the pool
to another machine, it still finds its members and structure.

If it can find its own labels, that is...

As a consequence of this, the safest way to use ZFS is with whole
devices. This pretty much guarantees your ZFS pool will be portable
across any system and ZFS will *always* be able to find it, no matter
what. The drawback is you might not know for sure which device id is
which physical drive, because many factors might influence device name
reordering. But this is pretty much the only drawback.

The diskid should work in a similar way. On systems that don't have disk
ids, you will fall back to the device name, so no big deal.

The next "safest" thing is the GPT label, which you create with gpart.
Many non-FreeBSD systems support it too, and your pool will be just fine there.
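As a sketch of that GPT route (the disk name ada4, the pool name tank, and the label bay04 are placeholder choices of mine, not from the original mail; run as root on FreeBSD):

```shell
# Put a GPT scheme on the blank disk, then add one ZFS partition
# carrying a human-readable GPT label (the -l argument is our name).
gpart create -s gpt ada4
gpart add -t freebsd-zfs -l bay04 ada4

# Build the pool on the label, not on the raw partition name,
# so it survives device renumbering across reboots.
zpool create tank gpt/bay04
```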

Worst are glabel and gmirror, mostly because they have trouble being
nested. But as long as you stick to some simple rules, these work ok too.

What you are seeing is that when you destroy the label, ZFS can no longer
find its own labels. The glabel metadata lives in the last sector of the
device, so the labeled provider is one sector smaller than the raw disk;
once you destroy the label, ZFS has no idea where to look for its own
labels -- the offsets no longer match.

If, in your example, you recreate the label, that pool will suddenly
work again -- even if you use a different name for the new label --
because ZFS's own labels become discoverable again.
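A sketch of that recovery, matching the scenario quoted below (the new label name is arbitrary, as noted above):

```shell
# The pool went UNAVAIL after `glabel destroy`. Writing any glabel
# back onto the same disk restores the provider geometry, so the
# pool's own ZFS labels land at the expected offsets again.
glabel create newname ada4
zpool clear xxx   # as the zpool status output itself suggests
```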

I myself prefer either raw disks or GPT labels. The latter especially on
smaller systems, where I would use GPT for the boot partitions anyway,
but also on systems with tens of drives, where I need to know the
physical location of each drive (and do not care much about its serial
number at that moment, which would be the case with diskid labels).
On such systems, I label each drive's GPT partition with a chassis/position name.

By the way, I still have a few systems that use glabels (/dev/label).
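And, toward the scripting goal raised in the question quoted below, here is a small Python sketch of mine that parses `glabel status` output and reports which disks carry no label of a given type. The three-column layout is assumed from the sample output in the question; treat the field positions as an assumption, not a stable interface.

```python
def parse_glabel_status(text):
    """Map each component device (e.g. 'ada4') to its label names."""
    labels = {}
    for line in text.splitlines()[1:]:      # skip the header row
        parts = line.split()
        if len(parts) != 3:                 # tolerate blank/odd lines
            continue
        name, _status, component = parts
        labels.setdefault(component, []).append(name)
    return labels

def unlabeled(all_disks, text, namespace="diskid/"):
    """Disks with no label under `namespace` -- pool candidates."""
    labels = parse_glabel_status(text)
    return [d for d in all_disks
            if not any(l.startswith(namespace) for l in labels.get(d, []))]
```

One would feed it the output of `subprocess.run(["glabel", "status"], ...)` on a FreeBSD box; the parsing itself is host-independent.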


On 17.11.14 14:42, Ivailo A. Tanusheff wrote:
> Dear all,
> I run to an interesting issue and I would like to discuss it with all of you.
> The whole thing began with me trying to identify available HDD to include in a zfs pool through a script/program. 
> I assumed that the easiest way of doing this is using glabel. For example:
> root at FreeBSD:~ # glabel status
>                                       Name  Status  Components
> gptid/248e758c-e267-11e3-95bb-08002796202b     N/A  ada0p1
>            diskid/DISK-VBdd471206-91164057     N/A  ada5
>            diskid/DISK-VBe98b5e75-0d8cf6dc     N/A  ada8
>            diskid/DISK-VB7d006584-01beca12     N/A  ada6
>            diskid/DISK-VB721029c3-66a60156     N/A  ada7
>            diskid/DISK-VB31481dbb-639540a1     N/A  ada2
>            diskid/DISK-VB95921208-4eb19f41     N/A  ada4
> So far it is OK and if I create pool like zpool create xxx ada4 then the line for ada4 will disappear from the glabel status.
> As far as I remember though it is not recommended to use production pools based on the device naming, so I wanted to switch to gpt label, i.e. diskid/DISK-VB95921208-4eb19f41.
> When I recreate pool like:
> zpool create xxx           diskid/DISK-VB95921208-4eb19f41    the pool is created without problems, but the device does not disappear from the glabel status list, thus making my program running wrong.
> Is this a problem with the zfs implementation, my server or the general idea is wrong?
> BTW, if I label the disk additionally, like: 
> glabel create VB95921208-4eb19f41 ada4
> zpool create xxx label/VB95921208-4eb19f41
> The glabel status again shows the right information. The problem with the latter approach is that if someone executes:
> glabel destroy -f VB95921208-4eb19f41
> The result becomes:
> pool: xxx
>  state: UNAVAIL
> status: One or more devices are faulted in response to IO failures.
> action: Make sure the affected devices are connected, then run 'zpool clear'.
>    see: http://illumos.org/msg/ZFS-8000-HC
>   scan: none requested
> config:
>         NAME                   STATE     READ WRITE CKSUM
>         xxx                    UNAVAIL      0     0     0
>           6968348230421469155  REMOVED      0     0     0  was /dev/label/VB95921208-4eb19f41
> And the data is practically unrecoverable.
> So my questions are:
> - Is there a way to make glabel to show the right data when I use diskid/DISK-VB95921208-4eb19f41     
> - Which is the most proper way of creating vdevs - with disk name (ada4), diskid (diskid/DISK-VB95921208-4eb19f41) or manual labeling? 
> - How may I found which disks are free, if the diskid approach is the best solution?
> Regards,
> Ivailo Tanusheff
> _______________________________________________
> freebsd-fs at freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"
