ZFS RAID 10 capacity expansion and uneven data distribution

Gabor Radnai gabor.radnai at gmail.com
Thu May 14 13:42:28 UTC 2015


Hi Kai,

As others pointed out, the cleanest way is to destroy and recreate your
pool from backup.

If you have no backup, though, a hackish in-place recreation process is
outlined below. But please be *WARNED*: it is your data, and the
recommended solution is to restore from backup. If you follow the process
below it is your call - it may work, but I cannot guarantee it. A power
outage, a disk failure, the sky falling down, whatever, and you may lose
your data. And it may not even work at all - more skilled readers may
beat me over the head for how ill-advised this is.

So, again be warned.

If you are still interested:

> On one server I am currently using a four disk RAID 10 zpool:
>
>	zpool              ONLINE       0     0     0
>	  mirror-0         ONLINE       0     0     0
>	    gpt/zpool-da2  ONLINE       0     0     0
>	    gpt/zpool-da3  ONLINE       0     0     0
>	  mirror-1         ONLINE       0     0     0
>	    gpt/zpool-da4  ONLINE       0     0     0
>	    gpt/zpool-da5  ONLINE       0     0     0


1. zpool split zpool zpool.old
This detaches one disk from each mirror: the current zpool keeps
gpt/zpool-da2 and gpt/zpool-da4, and a new pool named zpool.old is
created from gpt/zpool-da3 and gpt/zpool-da5 (see the expected layout
after the list).
2. zpool destroy zpool
3. truncate -s <proper size> /tmp/dummy.1 && truncate -s <proper size>
/tmp/dummy.2
(see the sizing note after the list)
4. zpool create <flags> zpool mirror da2 /tmp/dummy.1 mirror da4
/tmp/dummy.2
5. zpool offline zpool /tmp/dummy.1 && zpool offline zpool /tmp/dummy.2
6. zpool import zpool.old
7. (zfs create ... on zpool as needed) copy your stuff from zpool.old to
zpool (see the send/receive sketch after the list)
8. cross your fingers - there is *no* return from here !!
9. zpool destroy zpool.old
10. zpool labelclear da3 && zpool labelclear da5 # just to be on the safe side
11. zpool replace zpool /tmp/dummy.1 da3 && zpool replace zpool
/tmp/dummy.2 da5
12. wait for the resilver to finish (see the status check after the list)
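
For reference, after step 1 the two pools should look roughly like this
(a sketch based on the layout above; zpool.old stays exported until you
import it in step 6):

	zpool              ONLINE
	  gpt/zpool-da2    ONLINE
	  gpt/zpool-da4    ONLINE

	zpool.old          ONLINE
	  gpt/zpool-da3    ONLINE
	  gpt/zpool-da5    ONLINE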
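
For step 3, make the dummy files the same size as the real disks, so the
temporary mirrors come up at full size and the zpool replace in step 11
cannot fail on a size mismatch. On FreeBSD diskinfo prints the media size
in bytes (the byte count below is made up for a 2 TB disk):

	diskinfo /dev/da3
	# da3	512	2000398934016	3907029168	...
	truncate -s 2000398934016 /tmp/dummy.1 /tmp/dummy.2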
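
For step 7, one way to copy everything over while keeping snapshots and
dataset properties is a recursive send/receive (the snapshot name
@migrate is just an example):

	zfs snapshot -r zpool.old@migrate
	zfs send -R zpool.old@migrate | zfs receive -F zpool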
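
The resilver in step 12 can be watched with zpool status; the scan line
shows progress and an estimated completion time:

	zpool status zpool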

If this is total sh*t please ignore - I tried it in a VM and it seemed to work.

Thanks.

