ZFS pool faulted (corrupt metadata) but the disk data appears ok...

Michelle Sullivan michelle at sorbs.net
Fri Feb 6 02:20:36 UTC 2015


Xin Li wrote:
>
> On 02/05/15 17:36, Michelle Sullivan wrote:
>   
>>> This suggests the pool was connected to a different system, is
>>> that the case?
>>>
>>>       
>> No.
>>     
>
> Ok, that's good.  Actually, if you have two heads that write to the
> same pool at the same time, it can easily enter an unrecoverable state.
>
>   
>>> It's hard to tell right now, and we shall try all possible
>>> remedies but be prepared for the worst.
>>>       
>> I am :(
>>     
>
> The next thing I would try is to:
>
> 1. move /boot/zfs/zpool.cache to somewhere else;
>   

There isn't one.  However, cat'ing the inode shows there was one:

<83>^LR^@^L^@^D^A.^@^@^@<80>^LR^@<F4>^A^D^B..^@^@<89>^LR^@^X^@^H^Ozpool.cache.tmp^@<89>^LR^@<D0>^A^H^Kzpool.cache^@ [rest of the block is NUL (^@) padding, trimmed]
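Even without zpool.cache, the pool configuration can still be read from the on-disk labels. A sketch of how one might check them with zdb (the device name is a placeholder, not known from this thread):

```shell
# Dump the four ZFS labels from one member disk (substitute the real
# provider for da0 -- the name here is a placeholder).
zdb -l /dev/da0

# zdb can also examine a pool that has no cache file ("exported" mode):
zdb -e storage
```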
> 2. zpool import -f -n -F -X storage and see if the system would give
> you a proposal.
>   
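For reference, the two suggested steps work out to roughly the following (a sketch, not a tested recipe; the pool name "storage" is from this thread, and the cache path is the stock FreeBSD location):

```shell
#!/bin/sh
# Sketch of the recovery sequence suggested above (pool "storage" assumed).
CACHE=/boot/zfs/zpool.cache
POOL=storage

# Step 1: set any stale cache file aside so the import reads only the
# on-disk labels.
[ -f "$CACHE" ] && mv "$CACHE" "$CACHE.orig"

# Step 2: dry-run rewind import.  -F rewinds to an earlier txg, -X allows
# an extreme rewind, and -n makes it a proposal only -- nothing is written.
# Re-run without -n only once the proposed rewind looks sane.
zpool import -f -n -F -X "$POOL"
```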

Without -n, this crashes the machine out of memory... and there's 32G of
RAM. /boot/loader.conf contains:

vfs.zfs.prefetch_disable=1
#vfs.zfs.arc_min="8G"
#vfs.zfs.arc_max="16G"
#vm.kmem_size_max="8"
#vm.kmem_size="6G"
vfs.zfs.txg.timeout="5"
kern.maxvnodes=250000
vfs.zfs.write_limit_override=1073741824
vboxdrv_load="YES"
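If the rewind import keeps exhausting memory, one commonly suggested tweak is to re-enable the ARC cap that is commented out above so the import has kernel headroom (the value here is a guess for a 32G machine, not a tested recommendation):

```
# /boot/loader.conf -- cap the ARC during the recovery attempt
vfs.zfs.arc_max="8G"
```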

Regards,

Michelle

PS: it's 16x 3T drives in RAIDZ2 plus a hot spare (HSP) - 34T formatted.
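A quick sanity check on that capacity figure (assuming "HSP" means one of the 16 disks is a hot spare, leaving a 15-disk raidz2 of 3 TB drives):

```shell
#!/bin/sh
# Capacity sketch: 16 drives, 1 hot spare, raidz2 burns 2 disks of parity.
drives_total=16
spares=1
parity=2
data_disks=$((drives_total - spares - parity))   # 13 data disks
raw_tb=$((data_disks * 3))                       # 39 TB raw data capacity
# Convert decimal TB to TiB (awk for the floating point):
awk -v d="$data_disks" -v tb="$raw_tb" \
    'BEGIN { printf "%d data disks, ~%.1f TiB\n", d, tb * 1e12 / 2^40 }'
```

That ~35.5 TiB of raw data space is in the right neighborhood of the 34T formatted figure once allocation overhead is taken off.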

> Cheers,
> -- 
> Xin LI <delphij at delphij.net>    https://www.delphij.net/
> FreeBSD - The Power to Serve!           Live free or die
>   


-- 
Michelle Sullivan
http://www.mhix.org/



More information about the freebsd-fs mailing list