Removing an accidentally incorrect vdev from a ZFS pool
Steven Schlansker
stevenschlansker at berkeley.edu
Thu Jul 12 23:08:43 UTC 2007
Hello everyone,
I've been using ZFS on -CURRENT for a few weeks now with quite decent
results. I'm really glad that this apparently awesome filesystem is
available, and want to register my thanks to everyone who put work into
it :)
That being said, I've just run into a little snag. I wanted to extend
my zpool from 3 drives to 6 drives.
My first hope was that I could automagically extend the RAIDZ: grow
the vdev to a raidz1 with 6 devices, and then restripe
it to raidz2. That would have been pretty neat, and a great way to
manage drives. After reading the man pages and not finding that
functionality, I found many mailing list posts with everyone chatting
about exactly that. I hope the functionality finds its way into 7.0.
I'd add it myself, but I tried to read the ZFS code and a good
percentage of it flew over my non-filesystem-geek programmer's head :)
So then I settled on the next-best thing - adding another raidz1 vdev
with the three new drives. Not quite as flexible, but still adequate.
I checked the dmesg to find my drives - ad12, ad16, ad19. I ran
sudo zpool add universe raidz1 ad12 ad16 ad19, and then checked zfs list
to find my brand new double-sized zfs.... huh? It only grew by 40G?
That's strange. Maybe the zpool grew... nope. About here is where I
started to panic a bit. Checked the dmesg again... oops! It was ad18,
not ad19...
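In hindsight, a dry run would have saved me. If I'm reading zpool(1M)
right, the -n flag just prints the layout the command would create
without touching the pool, so (sketching from memory, untested):

```shell
# What I should have done first: cross-check the device names dmesg
# reported before committing anything to the pool.
dmesg | grep '^ad'                            # confirm the new disks really are ad12, ad16, ad18
zpool add -n universe raidz1 ad12 ad16 ad18   # dry run: print the would-be layout, change nothing
zpool add universe raidz1 ad12 ad16 ad18      # only then add for real
```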
Anyway, my questions now are:
Am I correct in concluding there is no way to reshape a raidz1 to a
raidz2? Is this functionality planned?
I now need to remove this broken vdev from my array. I haven't added
any data, so there shouldn't even be any data at all on it. However all
the remove/delete options to zpool seem to exclusively work on mirrors
and hot spares. I really need to get this vdev off the pool - it's
entirely useless. How can I do that? I've already pulled the
accidental drive - I want to try to recover the old filesystem off of
it. Luckily it wasn't too important, though it'd be nice to have. Now I
have an array stuck permanently degraded.
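If removing the vdev really is impossible, I suppose my fallback is to
resilver a disk into the phantom slot once I've recovered what I can
from ad18. Something like this, if I understand zpool(1M)'s replace
subcommand correctly (untested):

```shell
zpool status universe             # the mistaken vdev should show ad19 as UNAVAIL
zpool replace universe ad19 ad18  # resilver the correct disk in place of the typo'd one
zpool status universe             # watch the resilver; the pool should go back to ONLINE
```

That at least clears the DEGRADED state, even though I'm still stuck
with the extra vdev.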
On a related note, perhaps zpool should do a bit of sanity checking. I
know the Linux md tools require you to 'force' array creation if the
drives differ by +/- 5% or thereabouts. I just created a 400G/400G/40G
raidz, which is a totally stupid array layout. Maybe zpool should try
to catch that. (Yes, I should have caught it myself, but it's easy to
miss the extra digit when you see 4xxxxx and 4xxxx next to each other,
especially when they don't line up.)
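Even a crude check would do. As a sketch, here's a hypothetical little
helper (the name and the 5% threshold are my own invention, borrowed
from what I remember of the md tools) that compares disk sizes in
bytes, the kind of number diskinfo(8) reports as mediasize:

```shell
# Hypothetical helper: warn when candidate disks differ in size by
# more than ~5%. Pass each disk's size in bytes.
check_sizes() {
  min=$1; max=$1
  for s in "$@"; do
    [ "$s" -lt "$min" ] && min=$s
    [ "$s" -gt "$max" ] && max=$s
  done
  # flag if the largest disk is more than 5% bigger than the smallest
  if [ $((max * 100)) -gt $((min * 105)) ]; then
    echo "WARNING: disk sizes differ by more than 5%"
  else
    echo "sizes OK"
  fi
}

# My 400G/400G/40G blunder would have tripped it:
check_sizes 400000000000 400000000000 40000000000
```

Nothing fancy, but it would have flagged my missing digit before the
pool was built.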
Thank you so much for any advice anyone can offer!
Steven Schlansker
More information about the freebsd-current mailing list