zfs raid1 error resilvering and mount

Fleuriot Damien ml at my.gd
Tue Feb 19 13:27:04 UTC 2013


Well I can't see anything else to help you, except trying to replace your failed vdev and resilver from there…
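
For reference, the rough shape of that would be (a sketch; da0/da1 are placeholders for your actual failed and replacement disks):

zpool replace zroot da0 da1   # da0 = failed disk, da1 = its replacement
zpool status -v zroot         # resilver progress shows up here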



On Feb 19, 2013, at 2:24 PM, Konstantin Kuklin <konstantin.kuklin at gmail.com> wrote:

> zfs set canmount=off zroot/var/crash
> 
> I can't do this, because zfs list comes back empty.
> 
> 2013/2/19 Fleuriot Damien <ml at my.gd>:
>> The thing is, perhaps you have corrupted blocks that weren't caught either by ZFS or your drives' firmware, preventing the pool's operation.
>> 
>> Seeing zroot/var/crash is the problem, could you try:
>> 
>> 1/ booting from a live CD or flash drive
>> 2/ NOT starting a resilver
>> 3/ running this command:
>> zfs set canmount=off zroot/var/crash
>> 
>> 
>> This should prevent the system from trying to mount /var/crash from the ZFS pool.
>> 
>> Perhaps this will let you get further through the boot process, and maybe even bring your ZFS pool online correctly.
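>> 
>> Roughly, that sequence would be (a sketch; -N imports the pool without mounting any filesystems, and zroot is the pool name from your status output):
>> 
>> zpool import -f -N zroot   # note: the import itself may resume the pending resilver
>> zfs set canmount=off zroot/var/crash
>> zfs mount -a               # then try mounting everything else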
>> 
>> 
>> 
>> On Feb 19, 2013, at 12:52 PM, Konstantin Kuklin <konstantin.kuklin at gmail.com> wrote:
>> 
>>> You understood me correctly, but my problem is not the dead device... a raid1
>>> mirror should keep working with 1 device, and the replace command (or anything
>>> else) doesn't work, it just freezes.
>>> I only have the 2 warnings about the crashed filesystem zroot/var/crash, and that's all.
>>> Any idea how I can repair it without the default ZFS tools like zfs and zpool?
>>> 
>>> 
>>> 2013/2/19 Fleuriot Damien <ml at my.gd>:
>>>> If I understand you correctly, you have:
>>>> - booted another system from flash
>>>> - NOT replaced the failed device
>>>> - under this booted system, resilvering takes place automatically
>>>> 
>>>> 
>>>> While I cannot tell why ZFS tries to resilver without a new, proper device, I think it will only work once you've replaced the failed device.
>>>> 
>>>> Could you try replacing the failed drive?
>>>> 
>>>> 
>>>> On Feb 19, 2013, at 12:39 PM, Konstantin Kuklin <konstantin.kuklin at gmail.com> wrote:
>>>> 
>>>>> I didn't replace the disk. After the reboot the system wouldn't start (ZFS is
>>>>> installed as the default root filesystem), so I booted another system (from
>>>>> flash); resilvering started automatically and showed me the warnings, with
>>>>> progress frozen (dead while checking zroot/var/crash).
>>>>> Will replacing the dead disk heal var/crash with the <0x0> address?
>>>>> 
>>>>> 2013/2/18 Fleuriot Damien <ml at my.gd>:
>>>>>> Reassure me here, you did replace your failed vdev before trying to resilver, right?
>>>>>> 
>>>>>> Your zpool status suggests otherwise, so I only want to make sure this is a status from before replacing your drive.
>>>>>> 
>>>>>> 
>>>>>> On Feb 18, 2013, at 8:48 AM, Konstantin Kuklin <konstantin.kuklin at gmail.com> wrote:
>>>>>> 
>>>>>>> I can't do it, because resilvering is in progress (frozen at 0.1%) and zfs
>>>>>>> list comes back empty.
>>>>>>> 
>>>>>>> 2013/2/17 Fleuriot Damien <ml at my.gd>:
>>>>>>>> Hmmm, zfs destroy -f zroot/var/crash?
>>>>>>>> 
>>>>>>>> Then you can try a zfs mount -a.
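>>>>>>>> 
>>>>>>>> Something like this, assuming the pool can be imported at all (a sketch):
>>>>>>>> 
>>>>>>>> zfs destroy -f zroot/var/crash
>>>>>>>> zfs list -r zroot   # check the dataset is really gone
>>>>>>>> zfs mount -a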
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Removing pjd and mm from CC; if they want to read your message, they're old enough to check their ML subscription.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> On Feb 17, 2013, at 3:46 PM, Konstantin Kuklin <konstantin.kuklin at gmail.com> wrote:
>>>>>>>> 
>>>>>>>>> Hi, I have a ZFS raid1 pool with 2 devices.
>>>>>>>>> The first device died, and booting from the second doesn't work...
>>>>>>>>> 
>>>>>>>>> I grabbed the http://mfsbsd.vx.sk/ flash image, booted from it, and ran zpool import:
>>>>>>>>> http://puu.sh/2402E
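>>>>>>>>> 
>>>>>>>>> (The exact command is in the screenshot above; a typical invocation would be
>>>>>>>>> something like the following, with /mnt as an example altroot.)
>>>>>>>>> 
>>>>>>>>> zpool import -f -R /mnt zroot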
>>>>>>>>> 
>>>>>>>>> When I load zfs.ko and opensolaris.ko, I see this message:
>>>>>>>>> Solaris: WARNING: Can't open objset for zroot/var/crash
>>>>>>>>> Solaris: WARNING: Can't open objset for zroot/var/crash
>>>>>>>>> 
>>>>>>>>> zpool status:
>>>>>>>>> http://puu.sh/2405f
>>>>>>>>> 
>>>>>>>>> Resilvering freezes with:
>>>>>>>>> zpool status -v
>>>>>>>>>    .............
>>>>>>>>>    zroot/usr:<0x28ff>
>>>>>>>>>    zroot/usr:<0x29ff>
>>>>>>>>>    zroot/usr:<0x2aff>
>>>>>>>>>    zroot/var/crash:<0x0>
>>>>>>>>> root at Flash:/root #
>>>>>>>>> 
>>>>>>>>> How can I delete or drop the filesystem zroot/var/crash (1M-10M in size, I
>>>>>>>>> don't remember exactly) and mount the other ZFS mount points with my data?
>>>>>>>>> --
>>>>>>>>> Best regards,
>>>>>>>>> Konstantin Kuklin.
>>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> --
>>>>>>> Best regards,
>>>>>>> Konstantin Kuklin.
>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> Best regards,
>>>>> Konstantin Kuklin.
>>>> 
>>> 
>>> 
>>> 
>>> --
>>> Best regards,
>>> Konstantin Kuklin.
>> 
> 
> 
> 
> --
> Best regards,
> Konstantin Kuklin.


