ZFS panic under extreme circumstances (2/3 disks corrupted)

Freddie Cash fjwcash at gmail.com
Mon May 25 15:39:17 UTC 2009

On Mon, May 25, 2009 at 2:13 AM, Thomas Backman <serenity at exscape.org> wrote:
> On May 24, 2009, at 09:02 PM, Thomas Backman wrote:
>> So, I was playing around with RAID-Z and self-healing...
> Yet another follow-up to this.
> It appears that all traces of errors vanish after a reboot. So, say you have
> a dying disk; ZFS repairs the data for you, and you don't notice (unless you
> check zpool status). Then you reboot, and there's NO (easy?) way that I can
> tell to find out that something is wrong with your hardware!
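
If the counters really do reset on reboot, one way to catch a failing
drive anyway is to scrub periodically and check the status output
before anything clears it. A minimal sketch (the pool name "tank" and
the exact "zpool status" column layout are assumptions here):

```shell
#!/bin/sh
# Sketch: flag trouble in "zpool status"-style output read on stdin.
# Assumes device rows look like: NAME STATE READ WRITE CKSUM
check_status() {
    awk '
        # any non-ONLINE state is trouble
        /DEGRADED|FAULTED|UNAVAIL/ { bad = 1 }
        # an ONLINE device with nonzero error counters is trouble too
        NF == 5 && $2 == "ONLINE" && ($3 + $4 + $5 > 0) { bad = 1 }
        END { exit bad }
    '
}

# From a nightly cron job, roughly:
#   zpool scrub tank
#   ... once the scrub completes ...
#   zpool status tank | check_status || mail -s "tank: errors" root
```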

On our storage server that was initially configured using 1 large
24-drive raidz2 vdev (don't do that, by the way), we had 1 drive go
south.  "zpool status" was full of errors.  And the error counts
survived reboots.  Either that, or the drive was so bad that the error
counts started increasing right away after a boot.  After a week of
fighting with it to get the new drive to resilver and get added to the
vdev, we nuked it and re-created it using 3 raidz2 vdevs, each
made up of 8 drives.
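
For reference, a 3 x 8-drive raidz2 layout like the one above would be
created along these lines (the pool name and device names are
assumptions, not our actual configuration):

```shell
# Hypothetical: one pool, three 8-drive raidz2 vdevs.
zpool create tank \
    raidz2 da0  da1  da2  da3  da4  da5  da6  da7  \
    raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
    raidz2 da16 da17 da18 da19 da20 da21 da22 da23
```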

(Un)fortunately, that was the only failure we've had so far, so we
can't really confirm or deny the "error counts reset after reboot".

Freddie Cash
fjwcash at gmail.com

More information about the freebsd-current mailing list