ZFS: i/o error - all block copies unavailable

Dan Langille dan at langille.org
Mon Feb 22 18:03:27 UTC 2016


> On Feb 22, 2016, at 12:58 PM, Matthew Seaman <matthew at FreeBSD.org> wrote:
> 
> On 2016/02/22 17:41, Dan Langille wrote:
>> I have a FreeBSD 10.2 (with freebsd-update applied) system at home which cannot boot. The message is:
>> 
>> ZFS: i/o error - all block copies unavailable
>> ZFS: can't read MOS of pool system
>> gptzfsboot: failed to mount default pool system
> 
> This always used to indicate problems with /boot/zfs/zpool.cache being
> inconsistent.  However, my understanding is that ZFS should be able to
> cope with an inconsistent zpool.cache nowadays.
> 
> The trick there was to boot from some other media, export the pool and
> then import it again.
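>
A minimal sketch of that export/import trick, assuming an mfsBSD (or similar) live environment and the pool name 'system' from this thread; the commands touch a real pool, so review before running:

```shell
# From a rescue environment, re-import and then export the pool so its
# labels and cache state get rewritten.  -f forces the import if the pool
# was last used on another host; -R /mnt keeps the pool's filesystems
# mounted under /mnt instead of over the live system's own tree.
zpool import -f -R /mnt system
zpool export system
# Then reboot from the pool's own disks.
```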
> 
>> The screen shot is https://twitter.com/DLangille/status/701611716614946816
>> 
>> The zpool name is 'system'.
>> 
>> I booted the box via mfsBSD thumb drive, and was able to import the zpool: https://gist.github.com/dlangille/6da065e309301196b9cd
> 
> ... which means all the zpool.cache stuff above isn't going to help.
> 
>> I have also run "gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 XXX" against each drive, using both the files
>> provided with mfsBSD and the files from my local 10.2 system.  Neither changed the booting problem.
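>>
The bootcode step can be scripted across all drives in the pool; here is a dry-run sketch that only prints the commands (the ada0/ada1/ada2 device names are hypothetical placeholders, and -i 1 assumes the freebsd-boot partition is index 1 — check with "gpart show" first, then drop the echo to actually write the boot blocks):

```shell
# Dry run: print the gpart bootcode command for each drive.
# Device names are placeholders for illustration only.
for disk in ada0 ada1 ada2; do
    echo gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 "$disk"
done
```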
>> 
>> Ideas?  Suggestions?
> 
> Is this mirrored or RAIDZx?  If it's mirrored, you might be able to:
> 
>  - split your existing zpool (leaves it without redundancy)
>  - on the half of your drives removed from the existing zpool,
>    create a new zpool (again, without redundancy)
>  - do a zfs send | zfs receive to copy all your data into the
>    new zpool
>  - boot from the new zpool
>  - deconfigure the old zpool, and add the drives to the new zpool
>    to make it fully redundant again
>  - wait for lots of resilvering to complete
> 
> However, this really only works if the pool is mirrored throughout.
> RAIDZ users will be out of luck.
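>
The mirror-split procedure above could be sketched roughly as follows. All of the names here (ada0/ada1, 'newpool', the bootfs dataset) are hypothetical, every step is destructive, and this is an outline under those assumptions rather than a tested recipe:

```shell
# Hypothetical two-disk mirror: pool 'system' = mirror(ada0, ada1).
# 1. Break the mirror; 'system' is now non-redundant.
zpool detach system ada1
# 2. Create a new, non-redundant pool on the freed disk.
zpool create newpool ada1
# 3. Copy all data over via a recursive snapshot and send/receive.
zfs snapshot -r system@migrate
zfs send -R system@migrate | zfs receive -F newpool
# 4. Make the new pool bootable: write boot blocks with gpart bootcode
#    (as elsewhere in this thread) and point bootfs at the root dataset.
#    'newpool/ROOT/default' is a hypothetical dataset name.
zpool set bootfs=newpool/ROOT/default newpool
# 5. After booting from newpool: retire the old pool and re-form the
#    mirror; a (long) resilver follows.
zpool destroy system
zpool attach newpool ada1 ada0
```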

It is raidz2.  There is a zpool status here: http://dan.langille.org/2013/08/18/knew/


