Degraded zpool cannot detach old/bad drive

Rumen Telbizov telbizov at gmail.com
Thu Nov 18 02:16:11 UTC 2010


Hi jhell, everyone,

Thanks to all of you for your feedback and support.
Indeed, after successfully disabling the /dev/gptid/* entries, ZFS found
all the gpt/ labels without a problem and the array looked exactly the
way it did in the very beginning. So at that point I could say that I had
fully recovered the array, without data loss, to exactly the state it was
in when it was created. Not without adventure, though ;)
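
For anyone hitting the same issue, the gist is something like this (the
pool name below is just a placeholder, and your exact export/import steps
may differ):

    # /boot/loader.conf -- disable the gptid device nodes so that ZFS
    # sees the gpt/ labels instead of the /dev/gptid/* entries
    kern.geom.label.gptid.enable="0"

    # after a reboot, re-import the pool from the gpt/ labels
    zpool export tank
    zpool import -d /dev/gpt tank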

Ironically, for unrelated reasons, just after I fully recovered the pool
I had to destroy it and rebuild it from scratch with raidz2 vdevs of 8
disks rather than raidz1 vdevs of 4 disks ;) Basically, I need enough
redundancy to survive a double disk failure within a vdev: the chance of
a second disk failing while the pool resilvers for some 15 hours on these
2TB disks seems quite significant.
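
For the record, the new layout is along these lines (the pool name and
gpt labels below are just placeholders):

    zpool create tank \
        raidz2 gpt/disk00 gpt/disk01 gpt/disk02 gpt/disk03 \
               gpt/disk04 gpt/disk05 gpt/disk06 gpt/disk07 \
        raidz2 gpt/disk08 gpt/disk09 gpt/disk10 gpt/disk11 \
               gpt/disk12 gpt/disk13 gpt/disk14 gpt/disk15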

I wonder if this conversion will cut the pool's random IOPS roughly in half ...
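
Back-of-the-envelope, and assuming random IOPS scale roughly with the
number of vdevs rather than the number of disks: with, say, 24 disks,
6 x raidz1(4) gives about six vdevs' worth of random IOPS, while
3 x raidz2(8) gives about three, i.e. roughly half. Sequential throughput
should be affected much less.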

Anyway, thank you once again. Highly appreciated. I hope this turns out
to be a helpful piece of discussion for other people running into similar
problems.

Cheers,
Rumen Telbizov



On Tue, Nov 16, 2010 at 8:55 PM, jhell <jhell at dataix.net> wrote:

> On 11/16/2010 16:15, Rumen Telbizov wrote:
> > It seems like *kern.geom.label.gptid.enable: 0* does not work anymore?
> > I am pretty sure I was able to hide all the /dev/gptid/* entries with
> > this sysctl variable before, but now it doesn't quite work for me.
>
> I could be wrong but I believe that is more of a loader tunable than a
> sysctl that should be modified at run-time. Rebooting with this set to 0
> will disable showing the /dev/gptid directory.
>
> This makes me wonder if those sysctls should be marked read-only at
> run-time. Though you could even rm -rf /dev/gptid ;)
>
> --
>
>  jhell,v
>



-- 
Rumen Telbizov
http://telbizov.com
