[Bug 263473] ZFS drives fail to mount datasets when rebooting - 13.1-RC4
Date: Fri, 22 Apr 2022 18:27:31 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=263473
Bug ID: 263473
Summary: ZFS drives fail to mount datasets when rebooting - 13.1-RC4
Product: Base System
Version: 13.1-STABLE
Hardware: amd64
OS: Any
Status: New
Severity: Affects Only Me
Priority: ---
Component: misc
Assignee: bugs@FreeBSD.org
Reporter: rrsum@summerhill.org
I'm running a 13.1-RC4 server that has a ZFS problem that did not exist under
13.0-RELEASE.
First, here is the configuration of the server. The operating system is on an
NVMe (nvd) drive with all partitions formatted UFS. There are also 8
UFS-formatted drives in a SAS configuration; all of these show up when
rebooting. Finally, I have 2 drives in a ZFS mirror holding the home
directories and the data for a MySQL database. None of the ZFS datasets mount
when rebooting. After rebooting, if I do a "zpool import", all of the ZFS
datasets mount.
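For reference, the manual workaround and the usual boot-time knobs can be
checked as follows. This is a sketch of a standard FreeBSD 13 setup, not
something taken from the report itself:

# Manual workaround after boot, as described above:
zpool import              # scan attached disks for importable pools
zpool import -a           # import every pool found; its datasets then mount

# Boot-time ZFS mounting normally requires these settings:
sysrc zfs_enable                  # expect zfs_enable="YES" in /etc/rc.conf
grep zfs_load /boot/loader.conf   # expect zfs_load="YES"
service zfs onestart              # run the rc script that mounts datasets by hand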
Looking at dmesg after rebooting, the following lines appear after the nvd0
drive shows up:
Trying to mount root from ufs:/dev/nvd0p2 [rw]...
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
pid 48 (zpool), jid 0, uid 0: exited on signal 6
pid 49 (zpool), jid 0, uid 0: exited on signal 6
Further on in dmesg, the other drives show up: the 8 SAS drives and the 2 ZFS
drives. It appears ZFS is trying to configure itself before its drives are
known to the system.
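Signal 6 is SIGABRT, so the zpool processes launched early in boot are
aborting rather than simply finding no pools. A minimal diagnostic sketch;
the device path, GPT label, and pool name below are placeholders, not values
from the report:

# Inspect the ZFS label on one of the mirror members ("zdata0" is a
# placeholder for the actual GPT label):
zdb -l /dev/gpt/zdata0

# Check which cache file the pool records ("tank" stands in for the real
# pool name):
zpool get cachefile tank

# Confirm a boot-time cache file actually exists; the default location
# moved between releases, so check both:
ls -l /etc/zfs/zpool.cache /boot/zfs/zpool.cache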
It has worked flawlessly in 13.0 for almost a year. Note also that each of the
SAS drives and each of the SATA drives for ZFS has a gpart label, and fstab
uses those labels. However, since nvd0 is the only such "drive" in the box, it
does not have a label.
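For illustration, a labeled-drive layout like the one described might be set
up as follows; the label, device, and mount point names are hypothetical:

# Give a partition a GPT label ("sas0" on da0 is hypothetical):
gpart modify -i 1 -l sas0 da0

# Labeled partitions appear under /dev/gpt/ and can be referenced in
# /etc/fstab instead of raw device names:
# Device          Mountpoint  FStype  Options  Dump  Pass#
/dev/gpt/sas0     /data0      ufs     rw       2     2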