ZFS - Unable to offline drive in raidz1 based pool
Kurt Touet
ktouet at gmail.com
Tue Sep 22 19:30:31 UTC 2009
On Tue, Sep 22, 2009 at 6:56 AM, Pawel Jakub Dawidek <pjd at freebsd.org> wrote:
>
> Could you send the output of:
>
> # apply "zdb -l /dev/%1" ad{4,6,12,14}
>
> --
> Pawel Jakub Dawidek http://www.wheel.pl
> pjd at FreeBSD.org http://www.FreeBSD.org
> FreeBSD committer Am I Evil? Yes, I Am!
>
I was looking back at the thread, and realized that you had replied to
my first message and not the subsequent one (where I had successfully
scrubbed and resilvered the drive) -- so the debug output was from the
properly resilvered array.
The one question that still stands (for me) is how the system could have
reported itself as healthy after I reattached the failing drive. It
strikes me as the type of situation where a checksum error or degraded
status should appear. Am I wrong in thinking that, or is there another
way this could be detected? Looking at James's comment, if the one drive
had an older txg, should that have generated a non-healthy state?
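(As a sketch of what Pawel's `apply` one-liner expands to, and one way to
eyeball the txg question myself: `zdb -l` dumps the four labels on each
vdev, and each label records the txg it was last written at. The device
names ad{4,6,12,14} are the ones from my pool; this is just an
illustration, not something from the thread.)

```shell
# Dump the vdev labels on each member disk and pull out the txg lines,
# to spot a device that rejoined the pool with stale labels.
for dev in ad4 ad6 ad12 ad14; do
    echo "=== /dev/${dev} ==="
    # zdb -l prints all four labels stored on the device; a healthy
    # member should show the same txg as its peers.
    zdb -l /dev/${dev} | grep -w txg | sort -u
done
```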
Cheers,
-kurt