ZFS panic under extreme circumstances (2/3 disks corrupted)

Ivan Voras ivoras at freebsd.org
Sun May 24 23:24:34 UTC 2009

Thomas Backman wrote:
> On May 24, 2009, at 09:02 PM, Thomas Backman wrote:
>> So, I was playing around with RAID-Z and self-healing, when I decided
>> to take it another step and corrupt the data on *two* disks (well,
>> files via ggate) and see what happened. I obviously expected the pool
>> to go offline, but I didn't expect a kernel panic to follow!
>> What I did was something resembling:
>> 1) create three 100MB files, ggatel create to create GEOM providers
>> from them
>> 2) zpool create test raidz ggate{1..3}
>> 3) create a 100MB file inside the pool, md5 the file
>> 4) overwrite 10~20MB (IIRC) of disk2 with /dev/random, with dd
>> if=/dev/random of=./disk2 bs=1000k count=20 skip=40, or so (I now know
>> that I wanted *seek*, not *skip*, but it still shouldn't panic!)
>> 5) Check the md5 of the file: everything OK, but zpool status shows a
>> degraded pool.
>> 6) Repeat step #4, but with disk 3.
>> 7) zpool scrub test
>> 8) Panic!
>> [...]
> FWIW, I couldn't replicate this when using seek (i.e. corrupt the middle
> of the "disk" rather than the beginning):

Did you account for the time factor? Between your steps 5 and 6,
wouldn't ZFS automatically begin data repair?
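
The skip-vs-seek mixup from step 4 is worth spelling out, since it changes
where the corruption lands. A minimal sketch on a plain file (file name and
sizes are illustrative, not the original ggate backing files; the original
used /dev/random, /dev/urandom here is equivalent for this purpose):

```shell
# Make a 1 MiB "disk" of zeros to stand in for the ggate backing file.
dd if=/dev/zero of=disk2 bs=1k count=1024 2>/dev/null

# Wrong (what step 4 did): skip= discards blocks from the INPUT, so the
# random data is still written at offset 0 of the output file.
dd if=/dev/urandom of=disk2 bs=1k count=20 skip=40 conv=notrunc 2>/dev/null

# Right (what was intended): seek= positions the write 40 blocks into the
# OUTPUT, so the corruption lands in the middle of the "disk".
# conv=notrunc keeps dd from truncating the file at the seek point.
dd if=/dev/urandom of=disk2 bs=1k count=20 seek=40 conv=notrunc 2>/dev/null
```

With seek= the pool labels at the start of the provider survive, which is
presumably why the panic was not reproducible that way.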

