ZFS thinks that a newly inserted empty disk is part of the pool
Attila Nagy
bra at fsn.hu
Thu Aug 2 18:48:32 UTC 2012
Hi,
What I do (and always did on FreeBSD 8):
- wait for a disk to malfunction (its SCSI device disappears), or pull it
from the enclosure when I know it's bad (SMART info, checksum errors, etc.)
- insert a new disk, straight from the shop (still full of null bytes)
- run zpool replace pool daX when the device comes up again
This previously caused ZFS to resilver onto the replacement disk, and
everything was OK.
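The steps above can be sketched as the following command sequence; "tank" and "da2" are placeholder names for the pool and the affected device, not taken from the actual machines:

```shell
# Replacement procedure as described above (placeholder names).
zpool status -x tank      # confirm which vdev is faulted/unavailable
# ...pull the bad disk, insert the new one straight from the shop,
# wait for the SCSI device (da2) to reappear, then:
zpool replace tank da2    # should kick off a resilver onto the new disk
zpool status tank         # watch the resilver progress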
We recently switched those machines to 9 (r237433), and the above
behavior has changed.
The disk disappears, gets physically replaced, reappears, and zpool
replace now says that the disk is already part of the pool. I can even
see a ZFS signature on it with dd.
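The dd check can be sketched as below. This is a self-contained simulation against a scratch file standing in for /dev/daX; the offset and the string "version" are illustrative (a ZFS vdev label is 256 KiB and carries an nvlist, which includes an ASCII "version" name, near its start):

```shell
# Scratch file standing in for the "new" disk device.
disk=$(mktemp)
# A factory-fresh disk: nothing but null bytes.
dd if=/dev/zero of="$disk" bs=1k count=512 2>/dev/null
# Simulate leftover metadata by planting a label string at 16 KiB.
printf 'version' | dd of="$disk" bs=1 seek=16384 conv=notrunc 2>/dev/null
# Scan the front label area: 0 means clean, nonzero means old metadata.
found=$(dd if="$disk" bs=1k count=256 2>/dev/null | grep -ac version)
echo "$found"
rm -f "$disk"
```

On the real hardware one would point dd at /dev/daX instead of the scratch file; a truly blank disk should show nothing in that region.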
After rebooting the machine, I can issue the zpool replace command
without any problems, and ZFS starts to rebuild the disk's contents.
(I have no dd data from this state, sorry.)
Additional information which may be relevant: the drives are hooked up
to Smart Array (ciss) controllers as single-disk RAID 0 volumes (one
logical drive per physical drive).
I considered a ciss firmware bug (the controller caching the ZFS
metadata even after the disk has been replaced), but such a bug should
affect both FreeBSD 8 and 9 equally.
So I guess it's a FreeBSD bug that I simply never hit on 8.
Any ideas about what could cause this?
More information about the freebsd-fs
mailing list