ZFS pool restructuring and emergency repair
Quartz
quartz at sneakertech.com
Sat Jun 20 01:24:26 UTC 2015
I'm wondering if anyone can help me clear up a few questions and
concerns I have about ZFS.
It seems to me that ZFS is really not terribly flexible when it comes to
changing a pool's structure after the fact, and once you set something
up you're pretty much stuck with it, making future administration and
repairs complicated. To be fair, I'm not really clear on what all the
available tools can do or what the options are; I haven't really been
keeping up with ZFS development over the past few years, so I'm not sure
how much of my knowledge is out of date.
What are people's responses and recommendations given the following
hypothetical situations:
- A server is set up with a pool created a certain way. A couple years
later it's determined that the pool configuration wasn't a good choice
for the workload and it should be redone. As I understand it, ZFS has no
capability to reorganize, remove, or re-type vdevs, so your only option
is completely starting over with another whole pool. Is this still true?
If so, is there a correct way to copy an entire pool to another set of
disks in a way that preserves all the metadata and hierarchical dataset
information (snapshots, noatime, compression, dedup, quotas,
mountpoints, etc.)? It looks like 'send' and 'receive' might do it, but
I'm having trouble finding detailed information on exactly what they
copy, how much of a skeleton I need to create manually on the receiving
end first, and what breaks if we have a root-on-ZFS setup. (A rough
sketch of what I think the send/receive approach looks like follows
after the second scenario.)
- A server is set up with a pool created a certain way; for the sake of
argument let's say it's a raidz2 composed of 6x 2TB disks. There's
actually only ~1TB of data currently on the server, though. Let's say
there's a catastrophic emergency where one of the disks needs to be
replaced, but the only available spare is an old 500GB drive. As I
understand it, you're basically SOL. Even though 6x500GB (really 4x500GB
of usable space) is more than enough to hold 1TB of data, you can't do
anything in this situation, since although ZFS can expand a pool onto
larger disks, it can't shrink one under any circumstances. Is my
understanding still correct, or is there a way around this issue now?
(The second sketch below shows roughly what I expect to happen.)
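
On the first scenario, here's a rough sketch of how I imagine the
send/receive migration would go, assuming a source pool named 'tank' and
a freshly created destination pool named 'newtank'; the pool names, the
device name, and the FreeBSD-style ROOT/default layout are just
placeholders, so please correct me if I have the details wrong:

    # take a recursive snapshot of every dataset in the old pool
    zfs snapshot -r tank@migrate

    # -R should produce a replication stream carrying the child datasets,
    # their snapshots, and locally set properties (compression, quotas,
    # mountpoints, ...); -u keeps the received copies from mounting over
    # the live filesystems, and -F lets the target be overwritten
    zfs send -R tank@migrate | zfs receive -u -F newtank

    # for root-on-ZFS I assume the boot pointers also need updating:
    # point bootfs at the new root dataset and reinstall boot blocks
    # on the new disks (ada2 is just an example device)
    zpool set bootfs=newtank/ROOT/default newtank
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2

What I can't tell is whether that really carries everything over, or
whether things like pool-level properties and the root dataset's
mountpoint still have to be fixed up by hand afterwards.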
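
On the second scenario, this is what I expect the failure to look like,
with made-up device names (da3 being the dead 2TB member, da7 a
same-size-or-larger disk, and da8 the 500GB spare):

    # swapping a member for a larger (or equal) disk works, and with
    # autoexpand=on the pool grows once every member has been upgraded
    zpool set autoexpand=on tank
    zpool replace tank da3 da7      # da7 is 2TB or bigger: fine

    # but the same command with the 500GB spare should just be refused
    # (an error along the lines of "device is too small"), no matter
    # how little data the pool actually holds
    zpool replace tank da3 da8      # da8 is 500GB: expect failure

So as far as I can tell, the only way out would be to scrounge up enough
disks to build a second, smaller pool and evacuate the data with
send/receive, which is hardly an emergency repair.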