ZFS RAID 10 capacity expansion and uneven data distribution

krad kraduk at gmail.com
Mon May 18 07:41:32 UTC 2015


Depending on your dataset, you could also break it down to the file
level rather than mess around with zfs send etc.

eg

cp some_file some_file.new
rm some_file
mv some_file.new some_file


Just be careful with permissions etc. (you might need an extra flag or
two, e.g. cp -p to preserve them).
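
For a whole dataset you could script it. A rough sketch, assuming a
hypothetical mountpoint /zpool/data, no hard links, and nothing else
writing to the files while it runs:

# rewrite every regular file in place; cp -p keeps mode, owner and times
find /zpool/data -type f | while read -r f; do
    cp -p "$f" "$f.rebalance" && mv "$f.rebalance" "$f"
done

The rewrite makes ZFS reallocate the blocks, so they spread across all
vdevs; ACLs, hard links and sparse files would still need extra care.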

On 14 May 2015 at 14:59, Daniel Kalchev <daniel at digsys.bg> wrote:

> Not total BS, but... it could be made simpler/safer.
>
> skip 2,3,4 and 5
> 7a. zfs snapshot -r zpool.old@send
> 7b. zfs send -R zpool.old@send | zfs receive -F zpool
> do not skip 8 :)
> 11. zpool attach zpool da2 da3 && zpool attach zpool da4 da5
>
> Everywhere the instructions say daX, replace it with gpt/zpool-daX, as
> in the original config.
>
> After this operation, you should have the exact same zpool, with evenly
> redistributed data. You could use the chance to change ashift etc. Sadly,
> this works only for mirrors.
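>
> Put together, using Gabor's step numbers and the gpt labels, the whole
> thing would look roughly like this (just a sketch, not tested here;
> adjust names to your setup):
>
> zpool split zpool zpool.old                        # 1
> zpool import zpool.old                             # 6
> zfs snapshot -r zpool.old@send                     # 7a
> zfs send -R zpool.old@send | zfs receive -F zpool  # 7b
> # point of no return                               # 8
> zpool destroy zpool.old                            # 9
> zpool labelclear gpt/zpool-da3                     # 10, just in case
> zpool labelclear gpt/zpool-da5
> zpool attach zpool gpt/zpool-da2 gpt/zpool-da3     # 11
> zpool attach zpool gpt/zpool-da4 gpt/zpool-da5
> # wait for the resilver to finish                  # 12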
>
> Important to understand: from the first step on you have a
> non-redundant pool. It is very reasonable to do a scrub before starting
> this process, and of course to have a usable backup.
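>
> For example, before step 1:
>
> zpool scrub zpool
> zpool status zpool   # wait for the scrub to finish with 0 errors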
>
> Daniel
>
> > On 14.05.2015, at 16:42, Gabor Radnai <gabor.radnai at gmail.com> wrote:
> >
> > Hi Kai,
> >
> > As others pointed out the cleanest way is to destroy / recreate your pool
> > from backup.
> >
> > Though if you have no backup, a hackish, in-place recreation process
> > can be the following.
> > But please be *WARNED*: it is your data, and the recommended solution
> > is to use a backup. If you follow the process below it is your call -
> > it may work, but I cannot guarantee it. You can have a power outage, a
> > disk outage, the sky falling down, whatever, and you may lose your
> > data. And this may not even work - more skilled readers could hit me
> > on the head for how stupid this is.
> >
> > So, again be warned.
> >
> > If you are still interested:
> >
> >> On one server I am currently using a four disk RAID 10 zpool:
> >>
> >>      zpool              ONLINE       0     0     0
> >>        mirror-0         ONLINE       0     0     0
> >>          gpt/zpool-da2  ONLINE       0     0     0
> >>          gpt/zpool-da3  ONLINE       0     0     0
> >>        mirror-1         ONLINE       0     0     0
> >>          gpt/zpool-da4  ONLINE       0     0     0
> >>          gpt/zpool-da5  ONLINE       0     0     0
> >
> >
> > 1. zpool split zpool zpool.old
> > this will leave your current zpool composed of the slices of da2 and
> > da4, and create a new pool from da3 and da5.
> > 2. zpool destroy zpool
> > 3. truncate -s <proper size> /tmp/dummy.1 && truncate -s <proper size>
> > /tmp/dummy.2 (see the sizing note after step 12)
> > 4. zpool create <flags> zpool mirror da2 /tmp/dummy.1 mirror da4
> > /tmp/dummy.2
> > 5. zpool offline zpool /tmp/dummy.1 && zpool offline zpool /tmp/dummy.2
> > 6. zpool import zpool.old
> > 7. (zfs create ... on zpool as needed) copy your stuff from zpool.old to
> > zpool
> > 8. cross your fingers, *no* return from here !!
> > 9. zpool destroy zpool.old
> > 10. zpool labelclear da3 && zpool labelclear da5 # just to be on the
> > safe side
> > 11. zpool replace zpool /tmp/dummy.1 da3 && zpool replace zpool
> > /tmp/dummy.2 da5
> > 12. wait for resilver ...
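> >
> > For <proper size> in step 3 you want each dummy file to be exactly as
> > big as the real partition it stands in for, so the pool does not
> > shrink and the later zpool replace is still allowed. On FreeBSD
> > something like this (assuming the gpt/zpool-daX labels from the
> > original layout) shows the size to use:
> >
> > diskinfo -v /dev/gpt/zpool-da2 | grep 'mediasize in bytes'
> >
> > Use that number of bytes as <proper size> for both dummy files.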
> >
> > If this is total sh*t please ignore; I tried it in a VM and it seemed
> > to work.
> >
> > Thanks.