Re: ZFS boot devices specification

From: Karl Vogel <vogelke_at_pobox.com>
Date: Fri, 06 Sep 2024 06:01:29 UTC
>> On Thu, 05 Sep 2024 05:40:43 -0400,
>> Ludovit Koren <ludovit.koren@gmail.com> may have said:

> I cannot find the recommended way to specify the devices to boot
> from on FreeBSD 13.4-STABLE (stable/13-n258224-f702110bc4bc).
> I am using internal disks for the system and external ones for data.

> The problem occurs when the disks from the data array are assigned to
> /dev/da0 and /dev/da1.  The boot process starts but stops at remounting
> root read-write, because it cannot find the root file system.  It
> depends on the order of hardware initialization.

> What is the recommended way of specifying the internal disks in ZFS,
> something like a disk UUID?  The mount process tries to mount the ZFS
> providers gpt/zfs0 and gpt/zfs1 (which are shown in zpool status), but
> they are not on /dev/da0 and /dev/da1.

When I installed 13.2, I had a similar problem finding the root filesystem.
I got this far in the installer, then rebooted...

    Install FreeBSD Handbook?       yes (requires network)
        Language (en)               <empty>

    Final configuration             Exit
    Open shell                      No
    Complete                        Reboot

...and I ran into this:

    Mount from zroot/ROOT/default failed with error 6
    mountroot>

The default setting was vfs.root.mountfrom=zfs:zroot/ROOT/default
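
A side note in case it's useful: at the mountroot> prompt, typing "?" should
list the disk devices the kernel actually found, which tells you whether the
ZFS providers showed up at all, and you can retry a root spec by hand:

    mountroot> ?
    mountroot> zfs:zroot/ROOT/default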

I fired up the Live CD and ran gpart to see where things wound up -- output
abbreviated for readability:

    root# gpart list
    Geom name: ada0

        Providers:
        1. Name: ada0p1
           Mediasize: 272629760 (260M)
           label: efiboot0
           type: efi

        2. Name: ada0p2
           Mediasize: 524288 (512K)
           label: gptboot0
           type: freebsd-boot

        3. Name: ada0p3
           Mediasize: 4294967296 (4.0G)
           label: swap0
           type: freebsd-swap

        4. Name: ada0p4
           Mediasize: 995635494912 (927G)
           label: zfs0
           type: freebsd-zfs

        Consumers:
        1. Name: ada0
           Mediasize: 1000204886016 (932G)
           Sectorsize: 512
           Mode: r3w3e6

    Geom name: ada1

        Providers:
        1. Name: ada1p1
           Mediasize: 272629760 (260M)
           label: efiboot1
           type: efi

        2. Name: ada1p2
           Mediasize: 524288 (512K)
           label: gptboot1
           type: freebsd-boot

        3. Name: ada1p3
           Mediasize: 4294967296 (4.0G)
           label: swap1
           type: freebsd-swap

        4. Name: ada1p4
           Mediasize: 995635494912 (927G)
           label: zfs1
           type: freebsd-zfs

        Consumers:
        1. Name: ada1
           Mediasize: 1000204886016 (932G)

    Geom name: ada2                 [this is from the previous OS]

        Providers:
        1. Name: ada2p1
           label: (null)
           type: linux-lvm

        Consumers:
        1. Name: ada2
           Mediasize: 3000592982016 (2.7T)

    Geom name: ada3

        Providers:
        1. Name: ada3p1
           Mediasize: 209715200 (200M)
           label: efiboot3
           type: efi

        2. Name: ada3p2
           Mediasize: 2147483648 (2.0G)
           label: swap3
           type: freebsd-swap

        3. Name: ada3p3
           Mediasize: 2998234251264 (2.7T)
           label: zfs3
           type: freebsd-zfs

        Consumers:
        1. Name: ada3
           Mediasize: 3000592982016 (2.7T)

    Geom name: da0      [This is my SSD]

        Providers:
        1. Name: da0p1
           Mediasize: 272629760 (260M)
           label: efiboot2
           type: efi

        2. Name: da0p2
           Mediasize: 524288 (512K)
           label: gptboot2
           type: freebsd-boot

        3. Name: da0p3
           Mediasize: 4294967296 (4.0G)
           label: swap2
           type: freebsd-swap

        4. Name: da0p4
           Mediasize: 995635494912 (927G)
           label: zfs2
           type: freebsd-zfs

        Consumers:
        1. Name: da0
           Mediasize: 1000204886016 (932G)
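
A quicker way to see which GPT label currently maps to which device node is
"glabel status".  Given the layout above it would show something like this
(illustrative, not captured at the time):

    root# glabel status | grep zfs
          gpt/zfs0     N/A  ada0p4
          gpt/zfs1     N/A  ada1p4
          gpt/zfs3     N/A  ada3p3
          gpt/zfs2     N/A  da0p4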

It turns out that "zpool import" can use the GPT partition info to find
pools that are available for import:

    root# zpool import
      pool: zroot
     state: ONLINE
    action: can be imported using pool name or numeric ID

    config:
         NAME        STATE
         zroot       ONLINE
           mirror-0  ONLINE
             ada0p4  ONLINE
             ada1p4  ONLINE
             da0p4   ONLINE
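
Since every ZFS partition here already has a GPT label, one way to sidestep
the da0/da1 renumbering problem (and to keep the imported pool from mounting
over the live system's /) is to point the import at the label directory and
give it an altroot.  A sketch of what I would try, not what I actually ran:

    root# zpool import -f -R /mnt -d /dev/gpt zroot

With -d /dev/gpt the vdevs should be recorded by label (gpt/zfs0, gpt/zfs1,
gpt/zfs2) instead of whatever adaN/daN names they happened to get, and
-R /mnt keeps the pool's datasets under /mnt instead of /.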

I tried "zpool import zroot" but the system said the a pool with that
name already exists.  Running "zpool list" crashed immediately.

    mountroot> zfs:zroot/ada0p4

also failed.  I booted from the DVD and imported the pool under a different
name ("zpool import" takes the old pool name followed by the new one):

    mountroot> cd9660:/dev/cd0 ro
    (press return for /bin/sh)

    # zpool import zroot newroot

When booting from the DVD, you can also go to Shell and run this:

    # zfs mount newroot/ROOT/default

It mounted as /, which conflicted with the DVD, but at least it let me modify
files.  I edited /boot/loader.conf and added the mountfrom= line:

    root# cp /boot/loader.conf /boot/loader.conf.orig
    root# vi /boot/loader.conf

    root# cat /boot/loader.conf
    kern.geom.label.disk_ident.enable="0"
    kern.geom.label.gptid.enable="0"
    cryptodev_load="YES"
    zfs_load="YES"
    vfs.root.mountfrom="zfs:newroot/ROOT/default"

After that, the system came up without problems.  Hope this helps.
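
One more escape hatch, in case the loader.conf edit ever doesn't take: the
same variable can be set by hand for a single boot from the loader's OK
prompt (assuming the stock FreeBSD loader):

    OK show vfs.root.mountfrom
    OK set vfs.root.mountfrom=zfs:newroot/ROOT/default
    OK boot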

-- 
Karl Vogel                      I don't speak for anyone but myself

"I know it was you, Fredo. You broke my heart."
            --What Taylor Swift whispered to Travis Kelce, 30 Jan 2024