ZFS pool faulted (corrupt metadata) but the disk data appears ok...

Robert David robert at linsystem.net
Fri Feb 6 12:24:31 UTC 2015


I suggest booting a 10.1 live CD.

Then check whether the partitions were created before the pool was put on
the disk:

$ gpart show mfid0

And then try to import the pool as suggested.
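As a sketch, the whole sequence from the live CD might look like the
following (device name mfid0 and pool name "storage" are taken from this
thread; double-check both against your own layout before running anything):

```shell
# Run from the 10.1 live CD environment, as root.

# 1. Verify the partition table on the disk survived:
gpart show mfid0

# 2. Move any stale cache file out of the way (it may not exist):
mv /boot/zfs/zpool.cache /boot/zfs/zpool.cache.bak 2>/dev/null

# 3. Dry-run the extreme-rewind import first; -n only shows what
#    -F/-X would do, without writing anything to the pool:
zpool import -f -n -F -X storage

# 4. If the proposal looks sane, repeat without -n, ideally
#    read-only so nothing is modified until the data is backed up:
zpool import -f -F -o readonly=on storage
```

Note that -X is an undocumented "extreme rewind" option and can discard
recent transactions, which is why the -n dry run and the read-only import
are worth doing first.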

Robert.

On Fri, 06 Feb 2015 12:21:04 +0100
Michelle Sullivan <michelle at sorbs.net> wrote:

> Xin Li wrote:
> > -----BEGIN PGP SIGNED MESSAGE-----
> > Hash: SHA512
> >
> >
> >
> > On 2/5/15 18:20, Michelle Sullivan wrote:
> >   
> >> Xin Li wrote: On 02/05/15 17:36, Michelle Sullivan wrote:
> >>
> >>     
> >>>>>> This suggests the pool was connected to a different system,
> >>>>>> is that the case?
> >>>>>>
> >>>>>>
> >>>>>>             
> >>>>> No.
> >>>>>
> >>>>>           
> >> Ok, that's good.  Actually if you have two heads that write to
> >> the same pool at the same time, it can easily enter an
> >> unrecoverable state.
> >>
> >>
> >>     
> >>>>>> It's hard to tell right now, and we shall try all possible 
> >>>>>> remedies but be prepared for the worst.
> >>>>>>
> >>>>>>             
> >>>>> I am :(
> >>>>>
> >>>>>           
> >> The next thing I would try is to:
> >>
> >> 1. move /boot/zfs/zpool.cache to somewhere else;
> >>
> >>
> >>     
> >>> There isn't one.  However 'cat'ing the inode I can see there was
> >>> one...
> >>>       
> >>> <83>^LR^@^L^@^D^A.^@^@^@<80>^LR^@<F4>^A^D^B..^@^@<89>^LR^@^X^@^H^Ozpool.cache.tmp^@<89>^LR^@<D0>^A^H^Kzpool.cache^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
> >>>
> >>>
> >>>       
> > ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
> >   
> >> 2. zpool import -f -n -F -X storage and see if the system would
> >> give you a proposal.
> >>
> >>
> >>     
> >>> This crashes (without -n) the machine out of memory.... there's
> >>> 32G of RAM. /boot/loader.conf contains:
> >>>       
> >>> vfs.zfs.prefetch_disable=1 #vfs.zfs.arc_min="8G" 
> >>> #vfs.zfs.arc_max="16G" #vm.kmem_size_max="8" #vm.kmem_size="6G" 
> >>> vfs.zfs.txg.timeout="5" kern.maxvnodes=250000 
> >>> vfs.zfs.write_limit_override=1073741824 vboxdrv_load="YES"
> >>>       
> >
> > Which release is this?  write_limit_override was removed quite a
> > while ago.
> >   
> 
> FreeBSD colossus 9.2-RELEASE-p15 FreeBSD 9.2-RELEASE-p15 #0: Mon Nov  3
> 20:31:29 UTC 2014    
> root at amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64
> 
> 
> > I'd recommend using a fresh -CURRENT snapshot if possible (possibly
> > with -NODEBUG kernel).
> >   
> 
> I'm sorta afraid to try and upgrade it at this point.
> 
> Michelle
> 


