Suggestions for working with unstable nvme dev names in AWS
George Hartzell
hartzell at alerce.com
Tue May 14 15:59:07 UTC 2019
Matthew Seaman writes:
> On 14/05/2019 03:35, George Hartzell wrote:
> > Can anyone speak to the current state of device names for nvme disks
> > on AWS using the FreeBSD 12 AMI's? Is name-stability an issue? If
> > so, is there a work-around?
>
> I don't know about device name stability in AWS instances, but if you
> are using ZFS, then shuffling the disks around should not make any
> difference. With physical hardware it should be possible to eg. pop the
> disks out of one chassis and insert them into another in whatever order,
> and the system will still boot correctly. This sounds like the virtual
> equivalent of that.
> [...]
Thanks for the response!
Yes, once I have them set up (ZFS or labeled), it doesn't matter what
device names they end up having. For now I just do the setup by hand,
poking around a bit. It's the same trick as in the Linux world: you end
up referring to them by their UUID or ....
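For concreteness, here's the kind of thing I mean by "ZFS or labeled"
(a minimal sketch with made-up names, assuming the extra volume
currently shows up as /dev/nvd1):

    # GEOM label route: tag the disk once, then always refer to the label
    glabel label fast0 /dev/nvd1
    newfs /dev/label/fast0

    # ZFS route: the pool metadata travels with the disk, so whatever
    # device name it gets on the next boot doesn't matter
    zpool create fast /dev/nvd1

After that, /dev/label/fast0 (or the pool "fast") stays stable however
the nvd units get shuffled.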
The tricky bit is the automated setup. Say I ask for two additional
devices, "this" and "that". I intend to use "this" for a
high-performance whatchamacallit, so I specify high IOPS, etc. I
intend to use "that" for less important stuff, so I specify lower
performance.
Now, as the machine is provisioning itself (e.g. via Ansible), how can
I reliably decide which device to `zpool create` or `glabel` or ... with
which names?
The Linux world works around this with the `udev` rules *etc.* that I
described earlier.
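As I understand it, the Linux trick works because EBS exposes the
volume ID as the NVMe serial number (and the requested device name in
the vendor-specific identify data), and Amazon's udev rules turn that
into stable symlinks. With nvme-cli and a made-up volume ID it looks
roughly like:

    $ sudo nvme id-ctrl /dev/nvme1n1 | grep '^sn'
    sn        : vol0abc123def4567890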
There are hacky ways to work around it: I could ensure that they're
different sizes and use that to decide, or I could do it in two stages,
*etc.*
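For example, the different-sizes hack could look something like this
(just a sketch; assumes "this" was requested at 500G, "that" at 100G,
and that the root disk is nvd0):

    # Guess each extra volume's role from its size
    for d in /dev/nvd1 /dev/nvd2; do
        bytes=$(diskinfo "$d" | awk '{print $3}')   # mediasize in bytes
        if [ "$bytes" -gt 400000000000 ]; then
            echo "$d is probably the high-IOPS volume"
        else
            echo "$d is probably the slow volume"
        fi
    done

It works, but it falls apart the day both volumes happen to be the same
size.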
I'm just wondering if there's a way to leverage the bit of info AWS
has tucked away for us.
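Concretely, the bit of info I have in mind is the EBS volume ID, which
(as far as I can tell) gets surfaced as the NVMe serial number. My
provisioning already knows which volume ID it created for which
purpose, so something like this (untested, IDs made up) would let it
match them up:

    $ nvmecontrol devlist
     nvme0: Amazon Elastic Block Store
        nvme0ns1 (8192MB)
     nvme1: Amazon Elastic Block Store
        nvme1ns1 (512000MB)

    $ nvmecontrol identify nvme1 | grep 'Serial Number'
    Serial Number:               vol0abc123def4567890

If that serial really is the volume ID (minus the dash), the Ansible
side can map vol-0abc123def4567890 -> nvme1 -> nvd1 and run `glabel` or
`zpool create` against the right device. (I'd still want to confirm
that nvmeN and nvdN pair up reliably, or find the pairing some other
way.)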
g.