ZFS: Device names in a raidz1 pool after changing controllers
Ben RUBSON
ben.rubson at gmail.com
Sat Aug 12 07:25:19 UTC 2017
> On 12 Aug 2017, at 05:32, Chris Ross <cross+freebsd at distal.com> wrote:
>
>
>> On Aug 11, 2017, at 02:24 , Vitalij Satanivskij <satan at ukr.net> wrote:
>>
>>
>> Hello
>>
>> for disabling diskid
>> kern.geom.label.disk_ident.enable="0"
>>
>> same for gptid
>> kern.geom.label.gptid.enable="0"
>>
>> same for gpt label
>> kern.geom.label.gpt.enable="0"
>>
>> In /boot/loader.conf
>>
>> Just choose how you prefer
>>
>> And if tank isn't boot pool you can export and import with -d option to choose which naming of devices to use (eg /dev/gpt /dev/diskid etc)
>
> Okay. I thought I would try this last bit. However, only the first two disks (still ada0 and ada1) list partitions in /dev/gpt, because I just used the entirety of the other two disks I guess. And, only the other two disks have shown up in /dev/diskid. (nb, later research shows the other disks when I run “gpart list”, but they have “(null)” labels, as do their one partitions)
>
> So, "import -d /dev/gpt” doesn’t find anything (because none of tank’s disks are there), and “import -d /dev/diskid” finds the same as it configured automatically, with ada1p4 and the two diskid’s. Only /dev/gptid does what you describe above, where it lists all three by gptid, but I would prefer not to do that atm.
>
> I was hoping to get “ada1p4” “d0p1” and “d1p1”. If I ls /dev, I see:
>
> # ls -l /dev/ada* /dev/da*
> crw-r----- 1 root operator 0x5f Aug 11 01:20 /dev/ada0
> crw-r----- 1 root operator 0x60 Aug 11 01:20 /dev/ada0p1
> crw-r----- 1 root operator 0x61 Aug 11 01:20 /dev/ada0p2
> crw-r----- 1 root operator 0x62 Aug 11 01:20 /dev/ada0p3
> crw-r----- 1 root operator 0x64 Aug 11 01:20 /dev/ada1
> crw-r----- 1 root operator 0x6b Aug 11 01:20 /dev/ada1p1
> crw-r----- 1 root operator 0x6c Aug 11 01:20 /dev/ada1p2
> crw-r----- 1 root operator 0x6d Aug 11 01:20 /dev/ada1p3
> crw-r----- 1 root operator 0x6e Aug 11 01:20 /dev/ada1p4
> crw-r----- 1 root operator 0x65 Aug 11 01:20 /dev/da0
> crw-r----- 1 root operator 0x70 Aug 11 23:20 /dev/da0p1
> crw-r----- 1 root operator 0x66 Aug 11 01:20 /dev/da1
> crw-r----- 1 root operator 0x76 Aug 11 23:20 /dev/da1p1
>
>
> So I’d think it would work, but both “zpool import” and “zpool import -d /dev” show tank as:
>
> tank ONLINE
> raidz1-0 ONLINE
> ada1p4 ONLINE
> diskid/DISK-WOL240261932p1 ONLINE
> diskid/DISK-WOL240261922p1 ONLINE
>
>
> Let me know if there’s something else I can try. Otherwise, I may just try putting gpt labels on the other partitions. But, I have more controller swapping soon, so it’s mostly just informational at the moment.
Hello,
Try adding -o cachefile=none to the import command.
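For reference, a minimal sketch of what that import would look like (assuming the pool is named "tank", as above, and is not the boot pool, so it can be exported first):

```shell
# Export the pool, then re-import it without writing a cachefile,
# so device names are re-resolved from the directory given to -d.
zpool export tank
zpool import -o cachefile=none -d /dev/gpt tank
```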
As you mention, you should however add labels to your disks and make ZFS import the pool by label.
ada/da numbering may not be consistent across reboots, so labels are a much better way to quickly identify (failed) disks.
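A hedged sketch of adding GPT labels to the two unlabeled partitions and re-importing by label ("data0"/"data1" are placeholder label names; the -i 1 index assumes each disk has a single partition, as described above):

```shell
# Label the single partition on each of the two whole-disk members,
# then re-import the pool resolving members from /dev/gpt.
zpool export tank
gpart modify -i 1 -l data0 da0
gpart modify -i 1 -l data1 da1
zpool import -d /dev/gpt tank
```

The labels then show up as /dev/gpt/data0 and /dev/gpt/data1, which survive controller and bus renumbering.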
Ben
More information about the freebsd-fs mailing list