strange ZFS v28 states after disk upgrades/rebuilds
Dmitry Morozovsky
marck at rinet.ru
Sun Aug 21 12:06:24 UTC 2011
Dear colleagues,
I'm not sure how I got myself into this state, but let me try to explain:
My home file server is a fresh 8-stable/amd64, booted from CF, with ZFS root on a
5x1.5T raidz plus an SSD as cache. The raidz was built on raw disks ad4..ad12; the pool is ZFS v28.
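For reference, that layout corresponds to roughly the following creation commands
(a sketch, not a transcript; ad14h is the SSD cache partition):
-- 8< --
# 5x1.5T raidz on raw disks, plus an SSD partition as L2ARC
zpool create hm raidz ad4 ad6 ad8 ad10 ad12
zpool add hm cache ad14h
-- 8< --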
I'm now upgrading the disks to Hitachi 3T ones, this time with GPT on them. The first
replacement (ad12) went seamlessly. The next two did not: some hangs, some reboots, and
some failures to import the pool on boot (the latter disappeared each time after booting
single-user into the CF /bootdisk, then mount -u -w /, zpool import, reboot).
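For the record, each replacement goes roughly like this (a sketch; adN is the slot
being replaced, and the gptid is whatever /dev/gptid gets for the new partition):
-- 8< --
# give the new 3T Hitachi a GPT and one big freebsd-zfs partition,
# then replace the old raw-disk vdev with it:
gpart create -s gpt adN
gpart add -t freebsd-zfs adN
zpool replace hm adOLD gptid/<uuid-of-new-partition>
-- 8< --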
After the last reboot from single user I got the following (BTW, resilvering seems to
survive a reboot, but it can't report a proper resilvering speed):
-- 8< --
root@hamster:~# zpool status
  pool: hm
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Aug 21 13:12:38 2011
        22.6G scanned out of 1.02T at 18/s, (scan is slow, no estimated time)
        267M resilvered, 2.16% done
config:

        NAME                                              STATE     READ WRITE CKSUM
        hm                                                DEGRADED     0     0     0
          raidz1-0                                        DEGRADED     0     0     0
            ad4                                           ONLINE       0     0     0
            ad6                                           ONLINE       0     0     0
            replacing-2                                   DEGRADED     0     0     0
              13001111841528871597                        UNAVAIL      0     0     0  was /dev/ad8
              gptid/3962b8a3-cb6d-11e0-a2b4-0007e90d0cbb  ONLINE       0     0     0  (resilvering)
            replacing-3                                   DEGRADED     0     0     0
              4143382663317400064                         UNAVAIL      0     0     0  was /dev/ad10
              6273508279307911610                         UNAVAIL      0     0     0  was /dev/ad10
              13164605370838846626                        UNAVAIL      0     0     0  was /dev/ad10
              gptid/fabf95d4-cb4a-11e0-bdbd-0007e90d0cbb  ONLINE       0     0     0  (resilvering)
            gptid/9faf12fa-ca5b-11e0-b59d-0007e90d0cbb    ONLINE       0     0     0
        cache
          ad14h                                           ONLINE       0     0     0

errors: 1 data errors, use '-v' for a list
-- 8< --
After a couple of hours:
-- 8< --
root@hamster:~# zpool status -v
  pool: hm
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Aug 21 13:12:38 2011
        697G scanned out of 1.02T at 569/s, (scan is slow, no estimated time)
        1.50G resilvered, 66.59% done
config:

        NAME                                              STATE     READ WRITE CKSUM
        hm                                                DEGRADED     0     0     0
          raidz1-0                                        DEGRADED     0     0     0
            ad4                                           ONLINE       0     0     0
            ad6                                           ONLINE       0     0     0
            replacing-2                                   DEGRADED     0     0     0
              13001111841528871597                        UNAVAIL      0     0     0  was /dev/ad8
              gptid/3962b8a3-cb6d-11e0-a2b4-0007e90d0cbb  ONLINE       0     0     0  (resilvering)
            replacing-3                                   DEGRADED     0     0     0
              4143382663317400064                         UNAVAIL      0     0     0  was /dev/ad10
              6273508279307911610                         UNAVAIL      0     0     0  was /dev/ad10
              13164605370838846626                        UNAVAIL      0     0     0  was /dev/ad10
              gptid/fabf95d4-cb4a-11e0-bdbd-0007e90d0cbb  ONLINE       0     0     0  (resilvering)
            gptid/9faf12fa-ca5b-11e0-b59d-0007e90d0cbb    ONLINE       0     0     0
        cache
          ad14h                                           ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        /FreeBSD/ports.full/deskutils/horde-nag
-- 8< --
Then disk activity stops, and zpool locks up (it sits in spa_namespace_lock):
-- 8< --
root@hamster:~# zpool status -v
load: 0.00 cmd: zpool 6300 [spa_namespace_lock] 6.02r 0.00u 0.00s 0% 2032k
-- 8< --
I have a debugging kernel, and will be glad to produce more info, both to help revive
my pool and, hopefully, to avoid such sad situations in the future.
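For a start, here is the kind of data I can collect while it's wedged
(pid 6300 being the stuck zpool from the output above):
-- 8< --
# kernel stack of the hung zpool process:
procstat -kk 6300
# kernel stacks of all threads, to catch the ZFS worker threads too:
procstat -kk -a
-- 8< --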
Thanks in advance!
--
Sincerely,
D.Marck [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer: marck at FreeBSD.org ]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck at rinet.ru ***
------------------------------------------------------------------------