ZFS raidz1 pool unavailable after losing 1 device

Ludwig Pummer ludwigp at chip-web.com
Thu Jul 30 05:39:02 UTC 2009


Hello,

I found myself with a 4-drive raidz1 pool that was put into the UNAVAIL 
state ("insufficient replicas") even though 3 drives showed ONLINE and 
only 1 showed UNAVAIL. Can anyone suggest how I can get out of this pickle?

Here's the full backstory:
My system is 7.2-STABLE as of Jul 27, amd64, 4GB memory, just upgraded 
from last year's 6.4-STABLE. I had just set up a ZFS raidz volume to 
replace a graid5 volume I had been using. It was successfully set up 
using partitions across 4 disks, ad{6,8,10,12}s1e. Then I wanted to 
expand the raidz volume by merging in the space from the adjacent 
partition on each disk. I thought I could just fail out the partition 
device in ZFS, edit the bsdlabel, re-add the larger partition, let ZFS 
resilver, and repeat until done. That's when I found out that ZFS 
doesn't let you fail out a device in a raidz volume. No big deal, I 
thought, I'll just go to single user mode and mess with the partition 
when ZFS isn't looking. When it comes back up it should notice that one 
of the devices is gone, and I can do a 'zpool replace' and continue my 
plan.
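
For concreteness, the per-disk sequence I had in mind was roughly the 
following (ad12 shown; "zpool offline" is my guess at the right verb for 
failing the device out, and that is the step ZFS refused on the raidz):

    zpool offline storage ad12s1e    # drop this member out of the raidz
    bsdlabel -e ad12s1               # merge the d partition into e to grow it
    zpool replace storage ad12s1e    # put the grown partition back, let it resilver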

Well, after rebooting to single user mode and combining partitions 
ad12s1d and ad12s1e (I removed the d partition), "zfs volinit" and then 
"zpool status" just hung (Ctrl-C didn't kill it, so I rebooted). That 
seemed odd, so I guessed ZFS was confused by the ZFS metadata left on 
ad12s1e and blanked it out with "dd". That didn't help. I then changed 
the name of the partition to ad12s1d, thinking perhaps that would help. 
After that, "zfs volinit; zfs mount -a; zpool status" showed my raidz 
pool UNAVAIL with the message "insufficient replicas", ad{6,8,10}s1e 
ONLINE, and ad12s1e UNAVAIL "cannot open", plus a more detailed message 
pointing me to http://www.sun.com/msg/ZFS-8000-3C. I tried "zpool 
replace storage ad12s1e ad12s1d" but it refused, saying my zpool 
("storage") was unavailable. Ditto for pretty much every other zpool 
command I tried; "zpool clear" gave me a "permission denied" error.

After some more searching of forums/mailing lists, I ran across a post 
that suggested exporting & importing the zpool. I'm afraid that didn't 
fix my problem. The export worked, but now I cannot import the pool at 
all ("cannot import 'storage': pool may be in use from other system", 
or, with -f, "cannot import 'storage': one or more devices is currently 
unavailable").

Help!


