Re: problem with zfs raidz3 install on an 8-disk system, can't find loader
- In reply to: Dave Cottlehuber: "Re: problem with zfs raidz3 install on an 8-disk system, can't find loader"
Date: Thu, 07 Nov 2024 18:57:39 UTC
On Thu, Nov 07, 2024 at 12:58:32PM +0000, Dave Cottlehuber wrote:
>From inside the installer, it would be worth sharing with the list:
>
>- dmesg
>- sysctl machdep.bootmethod
>- relevant disk controller bits of `pciconf -lv`
>
>and comparing what you see when booting from h/w (not memstick.img)
>to see if it comes up in UEFI or BIOS mode.
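For reference, those can be collected from the installer's live shell; a rough sketch (the grep pattern is just a guess at what will match this controller):

  % sysctl machdep.bootmethod          # reports BIOS or UEFI
  % dmesg > /tmp/dmesg.txt             # capture the boot messages
  % pciconf -lv | grep -B4 -i raid     # locate the disk controller entry

I'll gather those from the test machine mentioned below.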
>
>While dealing with these issues I would start out first with a
>single drive and see if FreeBSD will boot to "plain zfs" first.
>
>I'm not familiar with raidz3, but that's where I'd start off.
Alas, that's not possible now, as the machine is in production.
There's another, similar machine I can test with, though. I'll try that
and report back.

That other machine has had FreeBSD on it since 12-CURRENT. It may have been
installed as UFS with ZFS added later for data; I need to check.
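(That should be quick to verify from a shell, e.g.:

  % mount | head -1    # first line shows whether / is ufs or zfs
  % gpart show -p      # shows what boot partitions each disk carries

which will tell whether it's root-on-ZFS or UFS-root with a separate data pool.)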
The particular machine (the one with the issue reported initially) was
running -CURRENT, but at the time it was using the HP SmartArray controller.
Nothing special was needed for the install back then.
The SmartArray presented two devices, as it was configured as two RAID
mirrors of 4 disks each. ZFS was then installed as a stripe over those two
devices, resulting in (and this is from memory) about 7 TB of space.
I wanted HBA (read: JBOD) mode for this system, so I had to destroy the
hardware array and put the card into HBA mode. When booted, the installer
saw all the individual disks and apparently made a raidz3 as configured
via the installer.
*Currently* the installer presents all the disks; no special driver is needed.
What makes me think this is a bug is that every disk can be added to the
zpool, but the resulting system cannot find the loader and will not boot.
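(A way to verify that would be to boot the memstick into the live shell and
inspect one of the raidz3 members, e.g.:

  % gpart show -p da1

which, on a working root-on-ZFS install, should show an efi and/or
freebsd-boot partition ahead of the freebsd-zfs partition.)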
To get around the installer issue, I installed onto one of the disks (da0)
as UFS; this proceeded normally. With the remaining seven disks I made
a raidz2:
% zpool status
  pool: data
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0
            da4     ONLINE       0     0     0
            da5     ONLINE       0     0     0
            da6     ONLINE       0     0     0
            da7     ONLINE       0     0     0
This appears to function normally.
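(For completeness, the pool was created with a stock invocation, something
along the lines of:

  % zpool create data raidz2 da1 da2 da3 da4 da5 da6 da7

i.e. raw da devices, as shown in the status output above.)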
The issue *appears* to be that the installer doesn't install the boot code
required to boot from the array, despite configuring and apparently
installing everything else. This is similar to a long-standing issue
affecting ZFS installs on arm64.
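(If that is what's happening, the workaround is presumably to write the boot
bits by hand from the live shell. A rough sketch, assuming GPT with
partition 1 an ESP and partition 2 a freebsd-boot partition -- adjust the
indices to whatever gpart show reports:

  # UEFI: put loader.efi on each disk's ESP
  % mount -t msdosfs /dev/da1p1 /mnt
  % mkdir -p /mnt/efi/boot
  % cp /boot/loader.efi /mnt/efi/boot/bootx64.efi
  % umount /mnt

  # legacy BIOS: write gptzfsboot into the freebsd-boot partition
  % gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 da1

repeated for each pool member. Untested here, though.)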
--