Re: ZFS: zpool status on degraded pools (FreeBSD12 vs FreeBSD13)

From: Dave Baukus <daveb_at_spectralogic.com>
Date: Wed, 14 Jul 2021 21:45:50 UTC
On 7/14/21 3:21 PM, Alan Somers wrote:
On Wed, Jul 14, 2021 at 3:10 PM Dave Baukus <daveb@spectralogic.com> wrote:
I'm seeking comments on the following two differences in the behavior of ZFS.
The first I consider a bug; the second could be a bug or a conscious choice:

1) Given a pool of two disks and one extra disk identical to the two pool members (no ZFS labels on the extra disk):
power the box off, replace one pool disk with the extra disk in the same location, and power the box back on.

The pool state on FreeBSD 13 is ONLINE vs DEGRADED on FreeBSD 12:

I agree, the FreeBSD 13 behavior seems like a bug.

2) Add a spare to a degraded pool and issue a zpool replace to activate the spare.
On FreeBSD 13, after the resilver completes, the pool remains DEGRADED until the degraded disk
is removed via zpool detach; on FreeBSD 12, the pool becomes ONLINE when the resilver completes:
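For reference, the spare workflow described above is the usual sequence below. This is a dry-run sketch that only echoes the commands; the pool name (tank) and device names (da0, da2) are hypothetical stand-ins:

```shell
# Dry-run sketch of the hot-spare activation sequence described above.
# Commands are echoed, not executed; tank/da0/da2 are hypothetical names.
run() { echo "would run: $*"; }

run zpool add tank spare da2       # add da2 to the pool as a hot spare
run zpool replace tank da0 da2     # activate the spare in place of the degraded da0
# ...wait for the resilver to complete; on FreeBSD 12 the pool returns to
# ONLINE at this point, while on FreeBSD 13 it stays DEGRADED until:
run zpool detach tank da0          # detach the old disk; the spare takes its place
```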

I agree.  I think I prefer the FreeBSD 13 behavior, but either way is sensible.

The change is no doubt due to the OpenZFS import in FreeBSD 13.  Have you tried to determine the responsible commits?  They could be regressions in OpenZFS, or they could be bugs that we fixed in FreeBSD but never upstreamed.
-Alan

Thanks for the feedback, Alan. I have not yet dug into #1 beyond zpool and lib[zpool|zfs].

--

Dave Baukus