Suggestions for working with unstable nvme dev names in AWS

George Hartzell hartzell at alerce.com
Tue May 14 21:01:30 UTC 2019


Matthias Oestreicher writes:
 > > On Tuesday, 14.05.2019 at 12:24 -0700, George Hartzell wrote:
 > > Polytropon writes:
 > >  > On Tue, 14 May 2019 08:59:01 -0700, George Hartzell wrote:
 > >  > > Matthew Seaman writes:
 > >  > >  > [...] but if you
 > >  > >  > are using ZFS, then shuffling the disks around should not make any
 > >  > >  > difference. 
 > >  > >  > [...]
 > >  > > Yes, once I have them set up (ZFS or labeled), it doesn't matter what
 > >  > > device names they end up having.  For now I just do the setup by hand,
 > >  > > poking around a bit.  Same trick in the Linux world, you end up
 > >  > > referring to them by their UUID or ....
 > >  > 
 > >  > In addition to what Matthew suggested, you could use UFS-IDs
 > >  > in case the disks are initialized with UFS. You can find more
 > >  > information here (at the bottom of the page):
 > >  > [...]
 > > 
 > > Yes.  As I mentioned in my response to Matthew, once I have some sort
 > > of filesystem/zpool on the device, it's straightforward (TMTOWTDI).
 > > 
 > > The problem is being able to provision the system automatically
 > > without user intervention.
 > > [...]
 > Hi,
 > I'm not familiar with Amazon's AWS, but if your only problem is shifting device
 > names for UFS filesystems, then on modern systems, GPT labels are the way to go.
 > [...]

Yes, yes, and yes.  I do appreciate all of the answers, but I
apparently haven't made the point of my question clear.  You've all
explained ways that I can log in and set things up manually so that
everything works as it should from then on.

You (Matthias) suggested that I could just:

> ```
> # gpart modify -l mylabel -i N /dev/nvme1
> ```

But how do I know which of the devices is the one that I'd like
labeled 'mylabel' and which is the one that I'd like labeled 'blort'?

Another way to explain my situation might be to ask how I can automate
applying the labels.

Imagine that, in my automated creation of the instance, I requested
two additional devices: a big-slow one to be called `/dev/sdh` and a
small-fast one to be called `/dev/sdz`.
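
For concreteness, the launch request might look something like this
(the AMI, instance type, sizes, and IOPS are all made-up placeholders):

```
aws ec2 run-instances \
    --image-id ami-00000000000000000 \
    --instance-type m5.large \
    --block-device-mappings '[
        {"DeviceName": "/dev/sdh", "Ebs": {"VolumeSize": 500, "VolumeType": "st1"}},
        {"DeviceName": "/dev/sdz", "Ebs": {"VolumeSize": 50, "VolumeType": "io1", "Iops": 2500}}
    ]'
```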

But when I boot, I find that I have two devices (in addition to the
root device), `/dev/nvme1` and `/dev/nvme2`.  There's no way to know
which is the big-slow one that I wanted to call `/dev/sdh` and which
is the small-fast `/dev/sdz`.  In fact, if I reboot the machine,
sometimes the big-slow one will be `/dev/nvme1` and sometimes it will
be `/dev/nvme2`.
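
One ray of hope: if I'm reading AWS' docs correctly, EBS exposes the
volume ID as the NVMe serial number, and `nvmecontrol(8)` can read
that back (the volume ID below is made up):

```
# The serial number in the identify data is the EBS volume ID,
# with the dash in "vol-..." dropped.
$ nvmecontrol identify nvme1 | grep -i serial
Serial Number:               vol0123456789abcdef
```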

Given that situation, how do you write an automated script that will
label the big-slow one `backups` and the small-fast one `speedy`?
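
The best I've come up with is a sketch along these lines, leaning on
the serial-number trick above.  It assumes the volume IDs are known to
the provisioning tool, that each controller has a single namespace (so
controller `nvmeN` pairs with disk `nvdN`), and that the partitions to
be labeled already exist:

```
#!/bin/sh
# Rough sketch, untested.  The volume IDs would be fed in by the
# provisioning tool; these two are made up.
BACKUPS_VOL="vol0123456789abcdef"   # the big-slow volume
SPEEDY_VOL="vol0fedcba9876543210"   # the small-fast volume

# Assumes fewer than ten controllers, each with one namespace,
# so that controller nvmeN corresponds to disk nvdN.
for ctrl in /dev/nvme[0-9]; do
    n="${ctrl#/dev/nvme}"
    serial=$(nvmecontrol identify "nvme${n}" | awk '/Serial Number/ {print $3}')
    case "${serial}" in
        "${BACKUPS_VOL}") gpart modify -l backups -i 1 "nvd${n}" ;;
        "${SPEEDY_VOL}")  gpart modify -l speedy  -i 1 "nvd${n}" ;;
    esac
done
```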

In the Linux world, `ebsnvme-id` & `udev` rules create symlinks at
boot time that link the names that I requested to whatever the device
is currently named.  That makes writing the script easy.
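
For example (device names hypothetical, and I'm going from memory on
the exact invocation):

```
# ebsnvme-id digs the requested mapping name out of the NVMe identify
# data; the stock udev rule feeds its -u output into SYMLINK+= so the
# alias exists from boot.
$ sudo /sbin/ebsnvme-id -u /dev/nvme1n1
sdh
$ ls -l /dev/sdh
lrwxrwxrwx 1 root root 7 May 14 12:00 /dev/sdh -> nvme1n1
```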

We lack `ebsnvme-id`, and our nvme driver doesn't seem to have any
knowledge of AWS' tricksy trick.  Or perhaps it does and I've simply
missed how we do it.

Thanks (seriously) for all the answers,

g.

