ZFS replace/expand problem
Johan Ström
johan at stromnet.se
Sun Dec 30 09:20:19 PST 2007
On Dec 27, 2007, at 20:25, Johan Ström wrote:
> Hello list
>
> First of all, I want to thank everybody involved in writing and
> porting ZFS to FreeBSD, it's working (except for this problem) great
> for me!
>
> Now to my problem. To summarize it: I want to replace two mirrored
> disks with bigger ones. The replace works well, but the vdev doesn't
> expand until I do an export/import. Details follow:
>
> I currently have the following setup:
>
> back-1 /$ zpool status
> pool: tank
> state: ONLINE
> scrub: none requested
> config:
> NAME STATE READ WRITE CKSUM
> tank ONLINE 0 0 0
> mirror ONLINE 0 0 0
> ad14s1d ONLINE 0 0 0
> ad16s1d ONLINE 0 0 0
> mirror ONLINE 0 0 0
> ad8 ONLINE 0 0 0
> ad10s2 ONLINE 0 0 0
> mirror ONLINE 0 0 0
> ad12 ONLINE 0 0 0
> ad10s1 ONLINE 0 0 0
>
> The ad8/ad10/ad12 setup is kind of stupid, I know.. ad8 is an 80 GB
> and ad10 a 120 GB, and a10 a 200 GB.. But now I want to replace
> those two mirrors with 4x 300 GB (or rather 2x 300 and 2x 320). So my
> plan was to do something like:
>
> zpool replace tank ad8 ad18
> zpool replace tank ad10s2 ad20
>
> where ad18 and ad20 are the two 300 GB disks.. Then the same thing
> for ad12 and ad10s1.. But before I did that, I wanted to make sure
> that it would actually expand as I've read, so I tried this first..
> On ad18/ad20 I had ad*s1a, a 500 MB partition, and ad*s1g, a ~280 GB
> partition. So I created a testtank with the ad*s1a partitions first:
>
> back-1 /$ zpool create testtank mirror /dev/ad18s1a /dev/ad20s1a
> back-1 /$ zpool list
> NAME SIZE USED AVAIL CAP HEALTH
> ALTROOT
> tank 878G 812G 65.1G 92% ONLINE -
> testtank 492M 111K 492M 0% ONLINE -
>
> back-1 /$ zpool status
> ..
> pool: testtank
> state: ONLINE
> scrub: none requested
> config:
>
> NAME STATE READ WRITE CKSUM
> testtank ONLINE 0 0 0
> mirror ONLINE 0 0 0
> ad18s1a ONLINE 0 0 0
> ad20s1a ONLINE 0 0 0
>
> errors: No known data errors
> back-1 /storage$ zpool replace testtank ad18s1a ad18s1g
> zpool status now shows:
> mirror ONLINE 0 0 0
> replacing ONLINE 0 0 0
> ad18s1a ONLINE 0 0 0
> ad18s1g ONLINE 0 0 0
> ad20s1a ONLINE 0 0 0
>
> When that was done (and only ad18s1g was showing), I did
>
> back-1 /storage$ zpool replace testtank ad20s1a ad20s1g
>
> and then same replacing output as above (but for ad20)
> Okay, so now that this is done it should have expanded, one would
> think, right?
>
> back-1 /storage$ zpool list
> NAME SIZE USED AVAIL CAP HEALTH
> ALTROOT
> ..
> testtank 492M 218K 492M 0% ONLINE -
>
>
> Nope.. Waited a while, nothing happened.. Some googling suggested
> that an export/import could do it:
>
> back-1 /storage$ zpool export testtank
> back-1 /storage$ zpool import testtank
> back-1 /storage$ zpool list
> NAME SIZE USED AVAIL CAP HEALTH
> ALTROOT
> ..
> testtank 289G 132K 289G 0% ONLINE -
>
> Yay! Okay, so it expands, but only after an export/import.. Haven't
> really found much documentation about this, but according to people
> in #opensolaris this should not be necessary.
> Not a big deal in this test case, but doing it for my real tank
> will require me to take the system down onto an external boot medium
> (CD or something) I guess, and then do the zpool export/import there,
> and then boot back up..
> Any guidelines on how to do this? Will doing the import/export from a
> CD (rescue shell, I guess) work as I expect? Or what would be the
> smartest way (the actual downtime isn't such a big deal as long as
> it is quick and works)?
>
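The rescue-shell step asked about above would presumably look like this (a sketch only; it assumes the rescue environment has ZFS kernel support loaded and can see the pool's disks):

```shell
# From a rescue shell (e.g. a FreeBSD live CD with ZFS support),
# reopen the pool so the larger vdev sizes are picked up:
zpool export tank    # close the pool and release its devices
zpool import tank    # reopen the devices; sizes are re-read at import
zpool list tank      # SIZE should now reflect the replacement disks
```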
For the record, I found a somewhat easier solution: just reboot, and
the size was updated! Tested with my testtank first: rebooted, worked.
Then did the same with my real tank (but with the whole disks ad20 and
ad18, not slices), and the extra space showed up fine after the reboot.
Thanks again for ZFS!
More information about the freebsd-fs mailing list