kern/148655: [zfs] Booting from a degraded raidz no longer works in 8-STABLE [regression]

Martin Matuska mm at FreeBSD.org
Thu Aug 5 12:00:24 UTC 2010


The following reply was made to PR kern/148655; it has been noted by GNATS.

From: Martin Matuska <mm at FreeBSD.org>
To: bug-followup at FreeBSD.org, Andriy Gapon <avg at icyb.net.ua>
Cc:  
Subject: Re: kern/148655: [zfs] Booting from a degraded raidz no longer works
 in 8-STABLE [regression]
Date: Thu, 05 Aug 2010 13:53:44 +0200

 I have done more code reading and debugging with mfsBSD in VirtualBox,
 and I have come to the following conclusion:
 
 sys/boot/zfs/zfsimpl.c reads the vdev information from the pool
 configuration, but it never checks whether those vdevs actually exist
 as physical devices. In other words, if the pool last saw its vdevs as
 HEALTHY, gptzfsboot assumes all of them are still available.
 
 In the case of a mirror, for example, vdev_mirror_read() tries to read
 from the first "healthy" vdev in its list. If that vdev is the missing
 one (e.g. a disconnected or failed drive), the read fails and the
 system cannot boot.
 
 In my test setup, vdev_mirror_read() reported two healthy children and
 tried to read from the non-existent vdev.
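 
 To illustrate the failure mode, here is a simplified sketch of the
 mirror read path (an approximation of the zfsimpl.c logic, using its
 vdev_t structure and the STAILQ macros from sys/queue.h, not the exact
 code):
 
     /*
      * Read from the first child whose recorded state looks usable.
      * Because v_state comes straight from the pool configuration,
      * a physically missing child can still appear HEALTHY here and
      * get picked, which makes the read (and the boot) fail.
      */
     static int
     vdev_mirror_read(vdev_t *vdev, const blkptr_t *bp, void *buf,
         off_t offset, size_t bytes)
     {
         vdev_t *kid;
         int rc = EIO;
 
         STAILQ_FOREACH(kid, &vdev->v_children, v_childlink) {
             if (kid->v_state != VDEV_STATE_HEALTHY)
                 continue;
             rc = kid->v_read(kid, bp, buf, offset, bytes);
             if (rc == 0)
                 return (0);
         }
         return (rc);
     }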
 
 I think that in the boot case we should first scan for all physically
 available vdevs and only then build the children from the pool
 configuration. Every child vdev that cannot be physically opened (i.e.
 has no representation from the previous scan) should be set to state
 VDEV_STATE_CANT_OPEN instead of being assumed VDEV_STATE_HEALTHY. A
 rough sketch of that approach follows.
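 
 In this sketch, vdev_is_present() is a hypothetical helper standing in
 for a lookup in the results of the physical device scan; the real fix
 would have to be integrated into the vdev setup code in zfsimpl.c:
 
     /*
      * After parsing the pool configuration, walk the vdev tree and
      * demote every leaf that was not found during the physical
      * device scan, so a missing disk does not keep the HEALTHY
      * state recorded in its last known configuration.
      */
     static void
     vdev_mark_missing(vdev_t *vdev)
     {
         vdev_t *kid;
 
         STAILQ_FOREACH(kid, &vdev->v_children, v_childlink)
             vdev_mark_missing(kid);
 
         /* Leaf vdevs with no physical counterpart cannot be opened. */
         if (STAILQ_EMPTY(&vdev->v_children) && !vdev_is_present(vdev))
             vdev->v_state = VDEV_STATE_CANT_OPEN;
     }
 
 With the states corrected this way, vdev_mirror_read() would skip the
 missing child and read from a remaining healthy one.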
