ZFS panic on mdX-based raidz
Lapo Luchini
lapo at lapo.it
Fri Jun 15 08:45:30 UTC 2007
I have a vmcore, a couple of them in fact, but they don't seem to have a
valid backtrace in them; I don't know why, I'm not really into kgdb yet 0=)
(the only frame visible is kern_shutdown, then only ?? entries).
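For reference, the usual way to pull a backtrace out of such a dump (only a
sketch: it assumes savecore(8) stored the dumps in /var/crash and that a
kernel with debug symbols is available, e.g. kernel.debug under /usr/obj
from a build with "makeoptions DEBUG=-g"; the GENERIC path below is just an
example) would be something like:

# open the crash dump against the matching debug kernel
kgdb /usr/obj/usr/src/sys/GENERIC/kernel.debug /var/crash/vmcore.0
# then, at the (kgdb) prompt, ask for the backtrace
bt

If only ?? frames show up, the kernel passed to kgdb probably does not match
the one that produced the dump, or it lacks debug symbols.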
This is the list of operations with which I can reproduce the panic at
will (or, at the very least, with which I have reproduced it three times):
# remove any leftover backing files and create three sparse ones
rm data0 data1 data2
truncate -s 64M data0
truncate -s 128M data1
truncate -s 256M data2
# attach them as memory disks md0, md1 and md2
mdconfig -f data0 -u 0
mdconfig -f data1 -u 1
mdconfig -f data2 -u 2
# build a raidz pool on top of them and fill it
zpool create prova raidz md0 md1 md2
zfs create -o mountpoint=/usr/tmp/p prova/p
dd if=/dev/zero of=/usr/tmp/p/file bs=1M
zpool status
# allow writes to the open md devices, then wipe two of the three
# raidz members (more than raidz1 can tolerate)
sysctl kern.geom.debugflags=16
dd if=/dev/zero of=/dev/md0 bs=1M
dd if=/dev/zero of=/dev/md1 bs=1M
# scrubbing the damaged pool is what triggers the panic
zpool scrub prova
zpool status
Below is the resulting status, showing two invalid disks (I wonder why :P)
and a scrub in progress; the host panics before the scrub finishes.
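To reset between attempts (a sketch, reusing the pool name and md unit
numbers from the steps above; after a panic and reboot the pool may instead
need to be recreated from scratch), something like this should tear the
test setup down:

# destroy the test pool, detach the memory disks, remove the backing files
zpool destroy -f prova
mdconfig -d -u 0
mdconfig -d -u 1
mdconfig -d -u 2
rm data0 data1 data2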
This is 7.0-CURRENT of June 11 2007 (with destroy_dev_sched.6.patch and
destroy_dev_sched_addon.2.patch from the "smb wedges" thread applied).
Lapo