ZFS and reordering drives
Baldur Gislason
baldur at foo.is
Sat Dec 5 18:41:13 UTC 2009
I have managed to import the pool in degraded mode, but I am having problems
getting it back into normal operating mode.
I can detach the drive either by pulling the SATA cable or by using atacontrol,
but I don't have any way of reattaching the drive to a running system.
Running atacontrol attach on the ATA channel after detaching it just gives an error,
and running atacontrol reinit on the channel after reconnecting the physically
disconnected drive doesn't find it.
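For reference, the attempted sequence was along these lines (the channel number
ata2 is a guess for illustration; on this system the reattach steps failed as
described):

```
root at enigma:~# atacontrol detach ata2    # drive goes away as expected
root at enigma:~# atacontrol attach ata2    # errors out after a detach
root at enigma:~# atacontrol reinit ata2    # doesn't rediscover the reconnected drive
```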
And there doesn't seem to be any zpool option to forcefully remove a device from
a pool and degrade it, and the pool refuses to cooperate while the drive is attached.
I even tried putting the drive on a USB-to-SATA adapter, but zpool wouldn't let me
replace it that way, saying the device was too small.
What to do, what to do?
Baldur
On Sat, Dec 05, 2009 at 11:11:09AM -0600, James R. Van Artsdalen wrote:
> This is beyond what I know - someone else will need to step in.
>
> If it's a raidz1 with one bad disk you can probably just unplug the bad disk and import the pool DEGRADED (due to the missing disk).
>
> ----- Original Message -----
> From: "Baldur Gislason" <baldur at foo.is>
> To: freebsd-fs at freebsd.org
> Sent: Saturday, December 5, 2009 11:04:00 AM
> Subject: Re: ZFS and reordering drives
>
> Ok. Running zdb -l on the four drives seems to indicate that one of them
> has some label issues.
> http://foo.is/~baldur/brokenzfs/
> ad4, ad6 and ad20 all have identical labels; the only differences are the GUIDs
> of the disk holding the label, as expected.
> root at enigma:~# diff ad4.label ad6.label
> 12c12
> < guid=12923783381249452341
> ---
> > guid=972519640617937764
> 61c61
> < guid=12923783381249452341
> ---
> > guid=972519640617937764
> 110c110
> < guid=12923783381249452341
> ---
> > guid=972519640617937764
> 159c159
> < guid=12923783381249452341
> ---
> > guid=972519640617937764
> root at enigma:~# diff ad4.label ad20.label
> 12c12
> < guid=12923783381249452341
> ---
> > guid=10715749107930065182
> 61c61
> < guid=12923783381249452341
> ---
> > guid=10715749107930065182
> 110c110
> < guid=12923783381249452341
> ---
> > guid=10715749107930065182
> 159c159
> < guid=12923783381249452341
> ---
> > guid=10715749107930065182
>
> ad18 has a somewhat broken label. Labels 0 and 1 exist and are identical to the
> labels on the rest; labels 2 and 3 are broken or nonexistent.
> --------------------------------------------
> LABEL 2
> --------------------------------------------
> failed to unpack label 2
> --------------------------------------------
> LABEL 3
> --------------------------------------------
> failed to unpack label 3
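For context, ZFS keeps four 256 KiB label copies per vdev: labels 0 and 1 at the
front of the device and labels 2 and 3 in the last 512 KiB, which is why a size
or geometry change can break only the trailing pair. A small sketch (the device
size below is a made-up example; a real value would come from diskinfo on the
affected machine) computes where the damaged labels 2 and 3 should live:

```shell
# Each ZFS vdev label is 256 KiB; two live at the front of the device
# and two occupy the last 512 KiB.
LABEL=262144

# Hypothetical device size in bytes (a real value would come from
# `diskinfo ad18` on the affected machine).
SIZE=500107862016

L2_OFF=$((SIZE - 2 * LABEL))   # start of label 2
L3_OFF=$((SIZE - LABEL))       # start of label 3

echo "label2 offset: $L2_OFF"
echo "label3 offset: $L3_OFF"

# The regions could then be inspected with dd/hexdump, e.g.:
# dd if=/dev/ad18 bs=$LABEL skip=$((L2_OFF / LABEL)) count=1 | hexdump -C | head
```

If the trailing labels fall off the end of the device as currently sized, that
would be consistent with zdb failing to unpack them.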
>
> How should I go about recovering this?
>
> Baldur
>
> On Sat, Dec 05, 2009 at 04:39:44PM +0000, Baldur Gislason wrote:
> > Ok. The pool that was degraded imported cleanly but the pool that went
> > unavailable won't import.
> > If it is of any significance, I did change the BIOS disk controller settings
> > from IDE to AHCI and then back to IDE before I noticed this pool was gone.
> >
> > root at enigma:~# zpool import zirconium
> > cannot import 'zirconium': invalid vdev configuration
> >
> > pool: zirconium
> > id: 16708799643457239163
> > state: UNAVAIL
> > status: The pool is formatted using an older on-disk version.
> > action: The pool cannot be imported due to damaged devices or data.
> > config:
> >
> >         zirconium   UNAVAIL  insufficient replicas
> >           raidz1    UNAVAIL  corrupted data
> >             ad4     ONLINE
> >             ad6     ONLINE
> >             ad18    ONLINE
> >             ad20    ONLINE
> >
> > How do I go about debugging this?
> >
> > Baldur
> >
> >
> > On Sat, Dec 05, 2009 at 11:33:33AM -0500, Gary Corcoran wrote:
> > > James R. Van Artsdalen wrote:
> > > > Baldur Gislason wrote:
> > > >> When I plugged them back in they didn't go in the right order
> > > >> and now both of my pools are broken.
> > > > zpool.cache is broken. Rename /boot/zfs/zpool.cache so that ZFS won't
> > > > load it, then import the pools manually. (a reboot might be needed
> > > > before the import; not sure).
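The manual reimport described above would look roughly like this (the .bak
suffix is arbitrary; zpool import with no arguments scans the devices and lists
any importable pools it finds):

```
root at enigma:~# mv /boot/zfs/zpool.cache /boot/zfs/zpool.cache.bak
root at enigma:~# zpool import              # scan devices, list importable pools
root at enigma:~# zpool import zirconium
```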
> > >
> > > If you were booting from ZFS, would you be out of luck (since you wouldn't
> > > be able to access zpool.cache before booting), or is there a way
> > > around this problem? Just wondering; I've avoided booting from ZFS so far.
> > >
> > > > The problem is that ZFS is recording the boot-time assigned name
> > > > (/dev/ad0) in the cache. I'm hoping to get GEOM to put the disk serial
> > > > number in /dev, i.e., /dev/serialnum/5LZ958QL. If you created the pool
> > > > using serial numbers then the cache would always work right.
> > >
> > > Is there any way today to avoid using the boot-assigned drive name (e.g.
> > > /dev/ad2) when creating the zpool? Again, just wondering; I don't need
> > > a solution this year...
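One approach that exists today, sketched under the assumption that glabel(8) is
available (the label names disk0/disk1 and the pool name tank are made up), is
to write a GEOM label on each disk and build the pool from the /dev/label/
names, which follow the disk rather than the controller port:

```
root at enigma:~# glabel label -v disk0 /dev/ad4
root at enigma:~# glabel label -v disk1 /dev/ad6
root at enigma:~# zpool create tank mirror label/disk0 label/disk1
```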
> > >
> > > Thanks,
> > > Gary
> > >
> > >
> > > _______________________________________________
> > > freebsd-fs at freebsd.org mailing list
> > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"