13.1 RC-3 problem

From: Rick Summerhill <rrsum_at_summerhill.org>
Date: Thu, 21 Apr 2022 16:24:22 UTC
I'm running a 13.1-RC3 server that has a ZFS problem that didn't exist 
under 13.0-RELEASE.

First, here is the configuration of the server.  It has the operating 
system on an NVMe (nvd) drive with all the partitions UFS.  It has 8 
UFS-formatted drives in a SAS configuration.  All of these show up when 
rebooting.  I also have 2 drives in a ZFS mirror holding the home 
directories and the data for a MySQL database.  None of the ZFS datasets 
mount when rebooting.  After rebooting, if I do a "zpool import", all of 
the ZFS datasets mount.
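For reference, the manual recovery described above looks like this from a root shell (the pool name `tank` is a placeholder; the actual pool name on this system isn't given):

```shell
# After boot, no ZFS datasets are mounted.
zpool import          # lists pools visible on the now-attached drives
zpool import tank     # placeholder pool name; imports and mounts its datasets
zfs list              # confirm the datasets are now mounted
```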

Looking at dmesg after rebooting, it shows the following lines after the 
nvd0 drive shows up:

ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
pid 48 (zpool), jid 0, uid 0: exited on signal 6
pid 49 (zpool), jid 0, uid 0: exited on signal 6

Further on in dmesg, the other drives show up: the 8 SAS drives and the 
2 ZFS drives.  It appears ZFS is trying to configure itself before it 
can know about its drives?
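Signal 6 is SIGABRT, so the early `zpool` processes are aborting, which is consistent with the pool's member drives not yet being attached when the boot-time import runs. One possible mitigation, assuming the SAS controller is simply slow to enumerate, would be to have the loader wait for CAM devices to settle before boot continues (this is a guess at a workaround, not a confirmed fix for this problem):

```shell
# /boot/loader.conf -- hedged workaround, not a confirmed fix:
# wait up to 10 seconds for CAM (SAS) devices to attach before boot proceeds
kern.cam.boot_delay="10000"
```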

Do I have something misconfigured in 13.1?  It has worked flawlessly in 
13.0 for almost a year.


Rick Summerhill
Retired, Chief Technology Officer, Internet2
10233 Timberhill Rd
Manchester, MI 48158 USA

Home: 734-428-1422
Web:  http://www.rick.summerhill.org