How to import raidz2 if only one disk is missing?
Vladislav V. Prodan
universite at ukr.net
Fri Jun 3 17:39:29 UTC 2011
24.05.2011 22:54, Vladislav V. Prodan wrote:
> how to change the status from "FAULTED" to "DEGRADED"?
>
I managed to partially bring the pool back up.
# zpool status tank
  pool: tank
 state: FAULTED
status: One or more devices could not be opened. There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        tank           FAULTED      0     0     1  corrupted data
          raidz2       DEGRADED     0     0     6
            gpt/disk0  ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0
            gpt/disk2  ONLINE       0     0     0
            gpt/disk3  UNAVAIL      0     0     0  cannot open
            gpt/disk4  ONLINE       0     0     0
            ad18p1     ONLINE       0     0     0
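Since raidz2 survives up to two missing members, the usual path per the 'action:' line above is to reattach or replace the failed device once the pool imports. A hedged sketch of the standard commands, assuming the pool can be imported and the replacement disk ends up under the same gpt/disk3 label:

```shell
# Sketch only -- pool and device names (tank, gpt/disk3) are taken from
# the zpool status output above; whether these commands succeed depends
# on the pool actually being importable.

# If the original disk comes back, bring it online:
zpool online tank gpt/disk3

# If a new disk has taken its place, resilver onto it:
zpool replace tank gpt/disk3

# Then watch resilver progress:
zpool status -v tank
```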
# zdb tank
    version=15
    txg=0
    pool_guid=17628573572433446879
    vdev_tree
        type='root'
        id=0
        guid=17628573572433446879
        bad config type 16 for stats
        children[0]
                type='raidz'
                id=0
                guid=17179795338638175685
                nparity=2
                metaslab_array=14
                metaslab_shift=35
                ashift=9
                asize=4500909195264
                is_log=0
                bad config type 16 for stats
                children[0]
                        type='disk'
                        id=0
                        guid=1193943216826871140
                        path='/dev/gpt/disk0'
                        whole_disk=0
                        DTL=22
                        bad config type 16 for stats
                children[1]
                        type='disk'
                        id=1
                        guid=15455958051005423086
                        path='/dev/gpt/disk1'
                        whole_disk=0
                        DTL=20
                        bad config type 16 for stats
                children[2]
                        type='disk'
                        id=2
                        guid=8664568011785700035
                        path='/dev/gpt/disk2'
                        whole_disk=0
                        DTL=163
                        bad config type 16 for stats
                children[3]
                        type='disk'
                        id=3
                        guid=8811702962298963660
                        path='/dev/gpt/disk3'
                        whole_disk=0
                        DTL=161
                        bad config type 16 for stats
                children[4]
                        type='disk'
                        id=4
                        guid=6754554128830363882
                        path='/dev/gpt/disk4'
                        whole_disk=0
                        DTL=164
                        bad config type 16 for stats
                children[5]
                        type='disk'
                        id=5
                        guid=16960671095707356147
                        path='/dev/ad18p1'
                        whole_disk=0
                        DTL=160
                        bad config type 16 for stats
    name='tank'
    state=0
    timestamp=1306186996
    hostid=143250101
    hostname='mary-teresa.otrada.od.ua'
zdb: can't open tank: Input/output error
I put in a new 2 TB HDD, but the pool does not see gpt/disk3 :(
# ll /dev/gpt
total 0
crw-r-----  1 root  operator    0,  93  3 Jun 20:07 boot
crw-r-----  1 root  operator    0,  97  3 Jun 20:07 disk-system
crw-r-----  1 root  operator    0, 103  3 Jun 20:07 disk0
crw-r-----  1 root  operator    0, 110  3 Jun 20:07 disk1
crw-r-----  1 root  operator    0, 119  3 Jun 20:07 disk2
crw-r-----  1 root  operator    0, 139  3 Jun 20:15 disk3
crw-r-----  1 root  operator    0, 117  3 Jun 20:07 disk4
crw-r-----  1 root  operator    0, 136  3 Jun 20:27 disk5
crw-r-----  1 root  operator    0,  95  3 Jun 20:07 swap
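One thing worth trying when the pool refuses to match the new device to the old path: 'zpool replace' also accepts the old member's vdev GUID instead of its name (8811702962298963660 for gpt/disk3 in the zdb vdev_tree above). A hedged sketch, assuming the pool can at least be imported in a degraded state:

```shell
# The GUID is taken from the zdb output above (children[3], the old
# gpt/disk3 member); this can only succeed once the pool itself imports.
zpool replace tank 8811702962298963660 gpt/disk3
```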
What else would you recommend to restore the pool?
--
Vladislav V. Prodan
VVP24-UANIC
+380[67]4584408
+380[99]4060508
vlad11 at jabber.ru
More information about the freebsd-fs mailing list