Strange ZFS pool failure after kernel update (ZFS v6 -> v13)
Yan V. Batuto
yan.batuto at gmail.com
Sat Jun 6 11:54:35 UTC 2009
Hello!
A RAID-Z pool (ZFS v6) works OK with 7.2-RELEASE, but it fails with a recent 7.2-STABLE.
--------------------------------------------------
# zpool status bigstore
  pool: bigstore
 state: ONLINE
 scrub: scrub completed with 0 errors on Fri Jun  5 22:28:19 2009
config:

        NAME        STATE     READ WRITE CKSUM
        bigstore    ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad4     ONLINE       0     0     0
            ad6     ONLINE       0     0     0
            ad8     ONLINE       0     0     0
            ad10    ONLINE       0     0     0

errors: No known data errors
--------------------------------------------------
After a cvsup to 7-STABLE, the usual procedure of rebuilding kernel and
world, and a reboot, the pool is failed.
It's quite strange that the pool now appears to consist of ad8, ad10,
and then ad8, ad10 again, instead of ad4, ad6, ad8, ad10.
I removed an additional disk controller a few weeks ago, so the raid-z
was originally created as ad8+ad10+ad12+ad14, and afterwards it showed
up as ad4+ad6+ad8+ad10. That was not a problem for ZFS v6, but
something is apparently going wrong here with ZFS v13.
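My guess is that an export/import cycle would force ZFS to rescan the
on-disk labels and pick up the renamed devices; this is just what I was
planning to try once I can get the pool accessible again (the -d option
to search a specific device directory is my assumption about what might
help here):

```shell
# Export the pool so ZFS forgets the cached (stale) device paths,
# then import it so the labels on disk are rescanned.
zpool export bigstore
zpool import bigstore

# If a plain import does not find it, point the label search
# at the device directory explicitly:
zpool import -d /dev bigstore
```

Is that the right approach, or is something else going on with the v13
label handling?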
--------------------------------------------------
# zpool status bigstore
  pool: bigstore
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
        or invalid.  There are insufficient replicas for the pool to
        continue functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-5E
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        bigstore    UNAVAIL      0     0     0  insufficient replicas
          raidz1    UNAVAIL      0     0     0  insufficient replicas
            ad8     FAULTED      0     0     0  corrupted data
            ad10    FAULTED      0     0     0  corrupted data
            ad8     ONLINE       0     0     0
            ad10    ONLINE       0     0     0
More information about the freebsd-fs mailing list