ZFS: Corrupted pool metadata after adding vdev to a pool - no
opportunity to rescue data from healthy vdevs? Remove a vdev?
Rewrite metadata?
Thomas Göllner (Newsletter)
Newsletter at goelli.de
Tue Sep 11 11:08:20 UTC 2012
Hi all,
I recently crashed my pool by adding a new vdev to it.
I'm running NAS4Free 9.0.0.1 - Sandstorm (Revision 188). My pool
"GoelliZFS1" has one vdev - a raidz of three 3 TB discs. As I needed more
space, I put three 1.5 TB discs in the machine and created a new raidz vdev.
Something must have happened when I added the new vdev to the existing pool.
I think the disc labels somehow got mixed up, because after adding the vdev
my pool had a capacity of 16 TB o_O Up to that point I had done everything
via the webGUI.
I thought a restart could help, but after that my pool was gone.
Now I have done some reading and tried things via the CLI over SSH. I don't
want to paste the whole log here, because it would be too long. I'll give a
short summary, and if you want to know more, just ask ;-)
With "zpool import" I can see my pool. I checked the SMART logs to verify
the disc names. The options -F and -X didn't help. With the option -V the
pool was imported, but it is still faulted.
goelli-nas4free:~# zpool import -faV
goelli-nas4free:~# zpool status
  pool: GoelliZFS1
 state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-72
  scan: none requested
config:

        NAME          STATE    READ WRITE CKSUM
        GoelliZFS1    FAULTED     1     0     0
          missing-0   ONLINE      0     0     0
          raidz1-1    ONLINE      0     0     0
            ada3      ONLINE      0     0     0
            ada4      ONLINE      0     0     0
            ada4      ONLINE      0     0     0
I used "zdb -l" on all discs. All four labels are present on each disc.
"zdb" also gave me some output (too long to post).
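For reference, this is roughly how I checked the labels (the disc names are
the ones on my machine, so adjust them to yours):

```shell
# Dump the four ZFS labels from each disc. On a healthy disc all four
# labels (0-3) are present and agree on pool name, pool guid and txg.
for disk in ada0 ada1 ada2 ada3 ada4 ada5; do
    echo "=== ${disk} ==="
    zdb -l /dev/${disk}
done
```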
I'm sure now that my data is still on the discs. After adding the new vdev
I changed nothing.
So there must be a way to tell ZFS to dismiss the wrong entry in the
metadata - or to edit the metadata myself...
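From what I've read, the least destructive things I could still try are a
read-only import and a rewind to an earlier transaction group. I'm not sure
either is supported on my ZFS version, so treat this as a sketch; the txg
number is just a placeholder:

```shell
# Import the pool read-only so nothing on the discs is modified
# while inspecting it.
zpool import -o readonly=on -f GoelliZFS1

# On some ZFS builds, -T rewinds the import to a specific transaction
# group - a last-resort option, so commented out here.
# zpool import -F -T <txg> GoelliZFS1
```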
If you consider the following case, I think you would agree that there has
to be a way to detach a vdev...
If you have a pool of 4 vdevs which is full of data, you are supposed to
add more space to the pool by adding a new vdev, right? If now, for some
reason, after a short time and not much new data, the newly attached vdev
completely fails - what do you do? ZFS is always consistent on disk. ZFS
uses copy-on-write, so no data is changed until it's touched. In this case
you have a pool with 4 healthy vdevs holding all your data and one faulty
vdev with almost no data. And you get the message to discard all your data,
destroy the pool and roll back from backup?! Somewhat ridiculous, right?
I hope someone can tell me what I can try to do. I will appreciate any
kind of help...
Greetings,
Thomas