Re: ZFS: Corrupted pool metadata after adding vdev to a pool - no opportunity to rescue data from healthy vdevs? Remove a vdev? Rewrite metadata?

Thomas Göllner (Newsletter) Newsletter at goelli.de
Mon Sep 17 16:29:45 UTC 2012


> If you can afford to put your drives aside, you can try waiting until some tool eventually emerges. I will not promise anything,
> but I'm slowly making progress with my script. I'm motivated because I have a broken pool with photos myself. Trying to import
> that pool causes a core dump on every system I have tested, such as OpenSolaris, Illumos or SystemRescueCD.

It would be great if your script were able to deal with pools with broken labels. I will put the three 3TB disks aside and use the old 1.5TB disks instead. So if your script makes progress, or someone else writes a tool for restoring labels or reading data off broken pools, perhaps I can still get some data back. I think it will take some time to fill this fresh 3TB pool anyway ;-)
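In case it helps with your script: before shelving the disks I plan to dump whatever is left of the labels, so I know what any future tool has to work with. Just a sketch - ada0p2 and the output filename are placeholders for my actual pool members:

    # ZFS keeps four label copies per vdev, two at the start and two at
    # the end of the device; zdb -l prints whichever are still readable
    zdb -l /dev/ada0p2

    # optionally keep a raw copy of the first 512 KiB (labels 0 and 1)
    dd if=/dev/ada0p2 of=/root/ada0p2-labels.bin bs=256k count=2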

This would also solve the next problem I discovered...
These 1.5TB disks have 512-byte sectors, and I have one spare. If a second disk fails, my first thought was to replace it with a 4TB disk, and then keep replacing disks until all of them are swapped, so the pool could grow. But from what I have read now, that is not possible, is it? Because the 4TB drives would have 4K sectors.
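In case it is useful for anyone following the thread, this is roughly how I would check the two numbers involved before buying anything. A sketch only - "tank" and ada1 are placeholders for my pool name and one of the new drives:

    # which allocation shift were the existing vdevs created with?
    # ashift=9 means 512-byte allocations, ashift=12 means 4K
    zdb -C tank | grep ashift

    # what do the new drives report? sectorsize is the logical sector,
    # the physical sector usually shows up as stripesize on FreeBSD
    diskinfo -v /dev/ada1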


