ZFS RAID 10 capacity expansion and uneven data distribution

Kai Gallasch k@free.de
Sun May 17 09:17:43 UTC 2015


On 14.05.2015 15:59 Daniel Kalchev wrote:
> Not total BS, but it could be made simpler/safer.
> 
> skip 2,3,4 and 5
> 7a. zfs snapshot -r zpool.old@send
> 7b. zfs send -R zpool.old@send | zfs receive -F zpool
> do not skip 8 :)
> 11. zpool attach zpool da1 da2 && zpool attach zpool da3 da4

Quite nifty. I tried this on a test server and found that after the
zpool split it is safer to do the import with e.g. "-o altroot=/mnt",
because if you are doing this on a root pool, the imported filesystems
will otherwise be mounted over the existing root fs and the ZFS
installation becomes unusable at that point. (-> reboot)
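
A minimal sketch of that part (assuming the original pool is named
"zpool" and the split-off half "zpool.old", as in the quoted steps):

  # split one disk off each mirror into a new pool
  zpool split zpool zpool.old

  # import the split-off pool under an alternate root, so its
  # filesystems do not get mounted over the running system
  zpool import -o altroot=/mnt zpool.old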

Also, the zfs receive should not mount the received filesystems.
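
The -u flag of zfs receive does exactly that. A sketch of the
send/receive step with it (pool and snapshot names as in the quoted
steps 7a/7b):

  # recursive snapshot of the split-off pool, replicated back;
  # -u leaves the received filesystems unmounted
  zfs snapshot -r zpool.old@send
  zfs send -R zpool.old@send | zfs receive -u -F zpool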

> After this operation, you should have the exact same zpool, with evenly redistributed data. You could use the chance to change ashift etc. Sadly, this works only for mirrors.

In my case this is not true. After completion, the data is still not
evenly distributed across the mirror pairs and each pair has a
differing FRAG value. (Before doing the zfs send I destroyed the old
ZFS filesystems on the receiving side.) Still, when accessing the data
afterwards, the situation of one mirror pair being overused while the
other sits almost idle has improved - so this method does mitigate the
problem somewhat.
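
For reference, this is how the imbalance shows up (pool name is an
example):

  # space usage and FRAG per vdev / mirror pair
  zpool list -v zpool

  # I/O load per vdev, sampled every 5 seconds
  zpool iostat -v zpool 5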

When I recreate the pool and restore the data, the picture looks
different: the data is then evenly distributed across the mirrors and
they all have the same FRAG value - as expected.

My conclusion: Expanding a RAID 10 zpool by adding mirrored vdevs is
not really an option if you also want to benefit from the gained IOPS
of the new devices, because the existing data stays on the old vdevs.
In that case, recreating the pool is the cleanest solution. If you
cannot recreate the pool, you can consider this zpool split hack to
redistribute data across all vdevs - although you temporarily lose
your pool redundancy between the zpool split and the end of the
resilvering process (risky).
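
For completeness, a sketch of the final re-attach step (device names
taken from the quoted step 11) - the pool runs without redundancy
until the resilver triggered by the attach has finished:

  # drop the split-off pool once its data has been sent back,
  # then re-attach its disks to rebuild the mirrors
  zpool destroy zpool.old
  zpool attach zpool da1 da2
  zpool attach zpool da3 da4

  # redundancy is restored only when the resilver completes
  zpool status zpool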

Kai.


