Any way to change pool to use the gpt label instead of gptid?

Freddie Cash fjwcash at gmail.com
Mon Oct 24 05:58:54 UTC 2011


On Oct 23, 2011 6:46 PM, "Jeremy Chadwick" <freebsd at jdc.parodius.com> wrote:
>
> On Sun, Oct 23, 2011 at 08:24:54PM -0500, Larry Rosenman wrote:
> > On Sun, 23 Oct 2011, Jeremy Chadwick wrote:
> > >Aren't GPT labels stored in the /dev/gpt directory structure?
> > Nope, they're eaten:
> >
> > $ ls /dev/gpt
> > swap0
> > swap1
> > swap2
> > swap3
> > swap4
> > swap5
> > $
>
> This looks like a bug or design oddity in GEOM.  Based on your setup you
> should have swap[0-5] and disk[0-5] in /dev/gpt, not just swap[0-5].

GEOM shows all providers for a disk/partition that is not in use. Once you
access a disk/partition via a particular provider, all the others are
hidden. This prevents you from accessing a particular disk/partition via
multiple names.

For example, a GPT-partitioned disk could be accessed via the following GEOM
providers:
  /dev/ada0p1
  /dev/gptid/somelongstring
  /dev/gpt/some-label
  /dev/ufsid/someotherlongstring
  /dev/ufs/some-other-label

As soon as you mount the filesystem via one of those paths, all the rest are
hidden.
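
A hypothetical session illustrating this (the device and label names are
invented for the example):

  # While the partition is unused, its label provider is visible:
  $ ls /dev/gpt
  some-label

  # Open the same partition via its raw name instead...
  $ mount /dev/ada0p1 /mnt

  # ...and its alternate providers wither away:
  $ ls /dev/gpt
  $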

> What are you accomplishing by wanting to use GPT labels for ZFS vdev
> member names?  I fail to see what this gains.  Here's why I say that.  Let's
> pretend the quirk/problem/bug/whatever above isn't happening and you
> have lots of entries referencing /dev/gpt/disk[0-5] in your pool.  Now
> one of these things happens:
>
> 1. Physical disk ada3 craps out.  You replace the disk with a brand new
> one.  You can tell ZFS about ada3p3.  Happy days.
>
> 2. Physical disk ada3 craps out.  You replace the disk and, for whatever
> reason, the device name changes.  Because it's a new/fresh disk with no
> data on it, even "zpool import" isn't going to see any ZFS metadata on
> it.
>
> Let's say the new device is called "ada9" -- you're going to have to
> partition this thing anyway manually with gpart to set up your
> freebsd-boot, freebsd-swap, and freebsd-zfs partitions, right?  Which
> means you already have to know in advance of those commands the disk is
> called "ada9".
>
> So you do your partitioning and you issue "zpool replace zroot ada3p3
> ada9p3".  Done.
>
> 3. Physical disk ada3 craps out.  You replace the disk and, for whatever
> reason, insert what you thought was a new disk but was actually a
> previously-used disk which already has your above partitioning scheme on
> it.  ZFS isn't going to magically start using the ZFS bits in ada3p3;
> you have to manually issue the command "zpool replace zroot ada3p3" and
> it will resilver/overwrite the "old" stuff that was there.
>
> I can continue to list off some more "sub-examples" of #3, but the fact
> of the matter is, ZFS on FreeBSD defaults to having pools with the
> "autoreplace" property disabled, so automatic resilver/rebuild won't
> happen on insertion.  In fact, I'm under the impression "autoreplace"
> doesn't work on FreeBSD (how would CAM/GEOM/etc. "inform" ZFS?).
>
> So these are the only 3 scenarios I can think of.  Am I missing one that
> somehow justifies the use of GPT labels named "diskX" when you already
> have things effectively called that (adaXpX)?  I don't see the
> positives.  Let me know, I'm quite curious.
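
For reference, the replacement workflow in scenario 2 above would look
roughly like this (a sketch only; the device names, partition sizes, and
the pool name "zroot" are hypothetical and must match your actual layout):

  # Recreate the partition scheme on the new disk, ada9:
  gpart create -s gpt ada9
  gpart add -t freebsd-boot -s 512k ada9
  gpart add -t freebsd-swap -s 4g ada9
  gpart add -t freebsd-zfs ada9
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada9

  # Resilver onto the new freebsd-zfs partition:
  zpool replace zroot ada3p3 ada9p3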

You have a chassis with 24 drive bays in it, with the drives assigned to
multiple vdevs. You configure the box with FreeBSD X.Y and BIOS version Z.
Everything works well for many months, through many FreeBSD and BIOS
upgrades. And then a disk dies: ada3. One of the BIOS upgrades changed the
order in which PCI slots are enumerated. One of the FreeBSD upgrades
changed the naming scheme for disk devices. And you can't remember whether
you numbered your disks horizontally or vertically in the chassis.

If only you had used some kind of labelling, in software and on the chassis.

Yes, I have lived through that. Physical disk device nodes should not be
relied upon, as there are many ways in which they can change. But a label
in the disk's metadata should not change without user intervention.

ZFS is very good about finding drives that have been moved around in a
system (export the pool, move the drives, import the pool), as it stores
vdev information in metadata on the drives themselves.
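
A minimal illustration (the pool name is hypothetical):

  # Export the pool, rearrange the drives, then re-import;
  # ZFS finds the members by the metadata on the disks themselves:
  zpool export tank
  zpool import tank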

Why not do something similar to make your life easier when things go awry?
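
For example (a sketch: the bay-based label name, pool name, and partition
index are invented, but gpart's -l flag and zpool import's -d flag are
standard):

  # Export the pool so the partition isn't busy:
  zpool export tank

  # Write a human-meaningful label into the GPT itself
  # (here, partition 3 of ada3, labelled after its physical bay):
  gpart modify -i 3 -l bay03 ada3

  # Re-import the pool, telling ZFS to look only at the label paths:
  zpool import -d /dev/gpt tank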

