ZFS pool restructuring and emergency repair

Adam McDougall mcdouga9 at egr.msu.edu
Sat Jun 20 05:21:10 UTC 2015


On 06/19/2015 21:24, Quartz wrote:
> I'm wondering if anyone can help me clear up a few questions and
> concerns I have about ZFS.
> 
> It seems to me that ZFS is really not terribly flexible when it comes to
> changing a pool's structure after the fact, and once you set something
> up you're pretty much stuck with it, making future administration and
> repairs complicated. To be fair, I'm not really clear on what all the
> available tools can do and what the options are; I haven't really been
> keeping up with ZFS development over the past few years so I'm not sure
> how much of my knowledge is out of date.
> 
> What are people's responses and recommendations given the following
> hypothetical situations:
> 
> - A server is set up with a pool created a certain way. A couple years
> later it's determined that the pool configuration wasn't a good choice
> for the workload and it should be redone. As I understand it, ZFS has no
> capability to reorganize, remove, or re-type vdevs, so your only option
> is completely starting over with another whole pool. Is this still true?
> If so, is there a correct way to copy an entire pool to another set of
> disks in a way that preserves all the metadata and hierarchical dataset
> information? (snapshots, noatime, compression, dedupe, quotas,
> mountpoints, etc). It looks like 'send' and 'receive' might do it, but
> I'm having trouble finding detailed information on exactly what they
> copy, how much of a skeleton on the receiving end I need to manually
> create first, and what breaks if we have a root-on-ZFS setup.

The manpage for zfs(8) says (under "zfs send"):

-R      Generate a replication stream package, which will replicate
        the specified filesystem, and all descendent file systems, up
        to the named snapshot. When received, all properties,
        snapshots, descendent file systems, and clones are preserved.

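For illustration, a minimal sketch of a whole-pool copy using a replication
stream (the pool, disk and snapshot names here are made up, and the target
pool has to be created first; the datasets themselves are recreated by the
receive):

    # create the target pool on the new disks
    zpool create newpool raidz2 da6 da7 da8 da9 da10 da11
    # take a recursive snapshot of everything in the old pool
    zfs snapshot -r oldpool@migrate
    # replicate all datasets, snapshots, clones and properties;
    # -F lets receive overwrite the (empty) target, -d keeps the dataset
    # hierarchy under newpool, -u skips mounting the received datasets
    zfs send -R oldpool@migrate | zfs receive -Fdu newpool

For root on ZFS you would, roughly speaking, also have to write boot code to
the new disks (gpart bootcode) and point the bootfs pool property at the new
root dataset before booting from the new pool.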
> 
> 
> - A server is set up with a pool created a certain way, for the sake of
> argument let's say it's a raidz-2 comprised of 6x 2TB disks. There's
> only actually ~1TB of data currently on the server though. Let's say
> there's a catastrophic emergency where one of the disks needs to be
> replaced, but the only available spare is an old 500GB. As I understand
> it, you're basically SOL. Even though a 6x500 (really 4x500) is more
> than enough to hold 1TB of data, you can't do anything in this situation
> since although ZFS can expand a pool to fit larger disks, it can't
> shrink one under any circumstance. Is my understanding still correct or
> is there a way around this issue now?

See gvirstor(8), which lets you create an arbitrarily large virtual storage
device backed by chunks of physical storage that are allocated only as they
are actually used.  I have not used it.
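Assuming it behaves as its manpage describes, a rough sketch of how it could
be pressed into service here (the device, pool and virstor names and the
size are guesses on my part; check gvirstor(8) for the exact flags):

    # load the GEOM virstor class
    kldload geom_virstor
    # create a nominally 2 TB virtual device backed by the 500 GB spare
    # (-s is the virtual size in megabytes, if I remember the flag right)
    gvirstor label -s 2097152 spare2t /dev/ada9
    # hand the resulting virtual device to the pool as the replacement
    zpool replace tank da3 /dev/virstor/spare2t

The catch is that the pool still thinks it has a 2 TB member, so once more
data lands on it than the 500 GB disk can physically back, writes will
presumably start failing.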

