Suggestions for working with unstable nvme dev names in AWS
hartzell at alerce.com
Tue May 14 02:35:30 UTC 2019
Newer AWS instances use nvme disk devices. On Linux these devices do
not have stable names: the volume that was nvme1 on one boot might be
nvme3 the next time you boot. There are a variety of tricks that pry
the originally requested device name (e.g. /dev/sdh) out of the device
and then, e.g., use udev rules to make symbolic links between that
name and the unstable one.
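The udev trick looks roughly like the rule below. This is a sketch, not the rule Amazon actually ships; the helper path, its `--block-dev` flag, and the exact match keys are assumptions. The idea is: for each EBS NVMe namespace, run a helper that prints the stashed name and create a symlink under that name.

```
# /etc/udev/rules.d/70-ec2-nvme-devices.rules  (illustrative sketch)
# For each EBS NVMe disk, ask a helper for the originally requested
# name (e.g. sdh) and symlink it to the kernel's unstable name.
KERNEL=="nvme[0-9]*n[0-9]*", ENV{DEVTYPE}=="disk", \
  ATTRS{model}=="Amazon Elastic Block Store", \
  PROGRAM="/usr/local/sbin/ebsnvme-id --block-dev /dev/%k", \
  SYMLINK+="%c"
```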
Details are in the AWS docs.
The name of the device that was requested (e.g. /dev/sdh) is stashed
in the device (controller?) and accessible via a vendor-specific
extension. From the AWS docs:
> The device name is available through the NVMe controller
> vendor-specific extension (bytes 384:4095 of the controller
> identification).
Amazon provides a Python script in their Linux AMIs (googled-up
copies are easy to find) that grubs around and finds it for you. On
other Linuxes you can roll your own starting from `nvme id-ctrl`.
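Rolling your own can be sketched in Python by dumping the raw identify-controller page (e.g. `nvme id-ctrl --raw-binary /dev/nvme1 > id.bin`) and pulling the name out of the vendor-specific area. The byte offset (3072) and field width (32) below are assumptions based on the commonly cited `cut -c3073-3104` one-liner, not something confirmed by this post:

```python
import sys

# Assumed location of the stashed name inside the 4096-byte
# identify-controller page; treat both values as unverified.
NAME_OFFSET = 3072
NAME_LENGTH = 32

def ebs_device_name(id_ctrl: bytes) -> str:
    """Extract the originally requested device name (e.g. 'sdh')
    from a raw identify-controller page."""
    field = id_ctrl[NAME_OFFSET:NAME_OFFSET + NAME_LENGTH]
    # The field is NUL/space padded; keep only the name itself.
    name = field.split(b"\x00", 1)[0].decode("ascii", "replace").strip()
    # The name may or may not carry a leading '/dev/'; normalize it away.
    if name.startswith("/dev/"):
        name = name[len("/dev/"):]
    return name

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1], "rb") as f:
        print("/dev/" + ebs_device_name(f.read(4096)))
```

Usage would be something like `nvme id-ctrl --raw-binary /dev/nvme1 > id.bin && python3 ebsname.py id.bin`.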
I'm working with the FreeBSD 12 ZFS AMIs and assume that they'll have
the same issues with unstable naming that Linux has. After a handful
of tries, I haven't yet seen things change order, but the negative
result isn't very comforting.
I don't see the originally requested name in the output of
`nvmecontrol identify /dev/nvme1`, or `nvmecontrol identify -x -v
/dev/nvme1` (or `/dev/nvme1ns1`).
On a flier, I tried the Linux script, but it fails with `[Errno
25] Inappropriate ioctl for device`. I can see a few points of
alignment between the Linux script and the FreeBSD nvme driver,
e.g. the `NVME_OPC_IDENTIFY` opcode value from FreeBSD's
`nvme_admin_opcode` enum seems to match the value of
`NVME_ADMIN_IDENTIFY` on line 29 of the script, but that's as far as
I've unraveled it.
Can anyone speak to the current state of device names for nvme disks
on AWS using the FreeBSD 12 AMIs? Is name stability an issue? If
so, is there a work-around?