Strange ZFS behaviour when a drive fails.
Ståle Kristoffersen
staale at kristoffersen.ws
Wed Mar 26 07:52:21 PDT 2008
Hi, I have a zpool containing two raidz sets, and one of the drives died.
After a reboot the controller removed the disk and renumbered the others:
    NAME                     STATE     READ WRITE CKSUM
    media                    DEGRADED     0     0     0
      raidz1                 DEGRADED     0     0     0
        mfid1                ONLINE       0     0     0
        6496191869544847902  FAULTED      0     0     0  was /dev/mfid2
        mfid2                ONLINE       0     0     0
        mfid3                ONLINE       0     0     0
      raidz1                 ONLINE       0     0     0
        mfid4                ONLINE       0     0     0
        mfid5                ONLINE       0     0     0
        mfid6                ONLINE       0     0     0
        mfid7                ONLINE       0     0     0
I have since gotten the drive back online (as mfid8), and now, strangely, zfs reports that it is resilvering:
scrub: resilver in progress, 1,11% done, 307445734561825779h49m to go
But to what drive?
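If I read the tools right, zpool status -v should flag the device being rebuilt with "(resilvering)" next to its entry, but here the only candidate is the numeric GUID of the dead disk:

    # Verbose status; the resilver target is normally marked "(resilvering)".
    zpool status -v media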
zpool iostat shows that it reads a lot from all the drives in the first raidz:
                               capacity     operations    bandwidth
    pool                     used  avail   read  write   read  write
    -----------------------  -----  -----  -----  -----  -----  -----
    media                    3,87T   949G    299     11  35,5M  51,4K
      raidz1                 3,39T   253G    283      9  35,4M  46,8K
        mfid1                    -      -    148     15  11,8M  33,9K
        6496191869544847902      -      -      0      0  2,49K      0
        mfid2                    -      -    147     15  11,8M  33,5K
        mfid3                    -      -    147     16  11,8M  32,3K
      raidz1                  495G   695G     16      1   111K  4,63K
        mfid4                    -      -     10      1   336K  2,70K
        mfid5                    -      -     10      1   344K  2,70K
        mfid6                    -      -     11      1   385K  2,64K
        mfid7                    -      -     11      1   390K  2,22K
This seems strange to me.
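(For reference, that per-vdev breakdown is the zpool iostat -v view; something like the following reproduces it, with an arbitrary 5-second sampling interval:)

    # Per-vdev I/O statistics for the pool, sampled every 5 seconds.
    zpool iostat -v media 5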
PS: I then ran zpool replace media 6496191869544847902 mfid8 and got back:
cannot replace 6496191869544847902 with mfid8: mfid8 is busy
When I then ran zpool status, I got:
scrub: resilver completed with 0 errors on Wed Mar 26 15:15:50 2008
Why did it start to resilver onto "nothing", and why can't I replace it with mfid8 (even with -f)?
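Since the status above now says the resilver has completed, my plan, unless someone has a better idea, is simply to retry the replace; checking for a stale label with zdb is a guess on my part:

    # After "resilver completed" shows up in zpool status:
    zpool replace media 6496191869544847902 mfid8
    # If the "is busy" error persists, look for a leftover ZFS label on the disk:
    zdb -l /dev/mfid8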
--
Ståle Kristoffersen