ZFS: i/o error all block copies unavailable Invalid format
peter.maloney at brockmann-consult.de
Tue Dec 6 07:36:55 UTC 2011
On 06.12.2011 07:14, KOT MATPOCKuH wrote:
> Hello all!
> On 24 Nov I updated my sources via csup to RELENG_9 (9.0-PRERELEASE).
> After make installboot I successfully booted to single user mode.
> But after make installworld the system failed to boot with the message:
> ZFS: i/o error all block copies unavailable
> Invalid format
> The status command shows the status of all pools properly.
> The root filesystem is not compressed.
> # zfsboottest /dev/gpt/rootdisk /dev/gpt/rootmirr
> pool: sunway
> NAME STATE
> sunway ONLINE
> mirror ONLINE
> gpt/rootdisk ONLINE
> gpt/rootmirr ONLINE
> Restoring the old /boot/zfsloader solved the issue.
> Before this, I successfully updated 4 other systems at the same source
> level without any problems.
> My sys/boot/zfs/zfsimpl.c's version: 22.214.171.124 2011/11/19 10:49:03
> Where might the root cause of this problem be? And how can I debug it?
"Invalid format" sounds like the software doesn't understand the disks.
Check your pool (software) version with:
# zpool upgrade -v
Check your pool (on disk) version with (I forget the exact command):
# zpool get version sunway
My guess is that you installed the latest zfs on the pool, but left the
old version of the bootloader.
To fix an unbootable ZFS root where the disks themselves are working fine
(or merely degraded), this is the general procedure. I don't know if it
applies to your particular problem, but I am optimistic.
In this example, I copied a USB disk pool called zrootusb to one called
zrootusbcopy.
Import the pool using altroot and cachefile.
# zpool import -o altroot=/z -o cachefile=/tmp/zpool.cache zrootusbcopy
Set mount points (/ is fine; you don't need legacy... legacy is a hassle,
since you have to set it to / and back after umount every time you repair
things). Since altroot is /z, the root will be mounted at /z/; do not
prepend /z to the mountpoint you set:
# zfs list | grep zrootusbcopy
# zfs set mountpoint=/ zrootusbcopy
(if you were copying a disk and wanted the copy to be bootable, this is
the point at which you would snapshot and zfs send; here, the pool above
is the newly created bootable copy)
Make sure bootfs is set (bootfs is a pool property, so use zpool, not zfs):
# zpool get bootfs zrootusbcopy
# zpool set bootfs=zrootusbcopy zrootusbcopy
Copy the cache file to the new pool's /boot/zfs:
# cp /tmp/zpool.cache /z/boot/zfs/zpool.cache
Verify that /z/boot/loader.conf is correct (the pool name) and that it
sets zfs_load="YES".
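As a sketch, a minimal loader.conf for booting from this pool might look
like the following (the dataset name after "zfs:" is an assumption here;
use whatever your bootfs points at):

```shell
# /z/boot/loader.conf -- minimal ZFS-root sketch (hypothetical values)
zfs_load="YES"
vfs.root.mountfrom="zfs:zrootusbcopy"
```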
If this is your only zfs:
# zfs umount -a
otherwise one at a time:
# zfs umount zrootusbcopy/var/empty
# zfs umount zrootusbcopy/usr
or a script (bash, untested):
for name in $(zfs list -H -o name | grep -E "^zrootusbcopy/" | sort -r); do
    # sort -r unmounts child datasets before their parents
    zfs umount "$name"
done
zfs umount zrootusbcopy
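The reverse sort matters because a parent dataset cannot be unmounted
while a child is still mounted. A quick sketch with hypothetical dataset
names shows the order it produces:

```shell
# Child datasets sort after their parents lexically, so reversing the
# sort yields the deepest datasets first (hypothetical names).
printf '%s\n' zrootusbcopy/var zrootusbcopy/usr zrootusbcopy/var/empty | sort -r
# prints:
# zrootusbcopy/var/empty
# zrootusbcopy/var
# zrootusbcopy/usr
```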
Install the bootloader (possibly the only step you actually needed).
1. Figure out which disks and which partition index to put it on... I use:
2. Install. If it is a mirror, run 2 of these commands with different
disks (fill in your own partition index and disk):
# gpart bootcode -b /z/boot/pmbr -p /z/boot/gptzfsboot -i <index> <disk>
Then do not export.
Then reboot; try to boot your previously unbootable zfs root system.
Here is a thread where I suggested this method to someone and it worked
for him, although his error message was different.