ZFS v28 array doesn't expand with larger disks in mirror

Kurt Touet ktouet at gmail.com
Thu Jun 30 19:27:53 UTC 2011


# zpool online -e storage ad20
# zpool online -e storage ad18

storage     6.33T   958G
  raidz1    5.10T   363G
  mirror    1.23T   595G

Worked like a charm!
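
For the archives, the full sequence that got there (pool and device names
are from this box, so adjust to taste):

# zpool set autoexpand=on storage   # only affects future replacements; not retroactive
# zpool online -e storage ad20      # grow each already-replaced disk by hand
# zpool online -e storage ad18      # both members of the mirror need it
# zpool iostat -v storage           # confirm the extra space per vdev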

Thanks for the help,
-kurt


On Thu, Jun 30, 2011 at 9:14 AM, Artem Belevich <art at freebsd.org> wrote:
> On Thu, Jun 30, 2011 at 12:54 AM, Kurt Touet <ktouet at gmail.com> wrote:
>> Thanks for that info, Artem.  I have now set that property,
>> exported/imported, and rebooted to no avail.  Is this something that
>> needed to be set ahead of time?
>
> I guess the autoexpand property only matters when a disk is replaced and
> does not work retroactively. Try "zpool online -e".
>
> --Artem
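
(Side note for anyone reading this later: before concluding the property
does nothing, it's worth confirming it actually took.)

# zpool get autoexpand storage      # expect: storage  autoexpand  on  local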
>
>>
>> Thanks,
>> -kurt
>>
>> On Thu, Jun 30, 2011 at 1:03 AM, Artem Belevich <art at freebsd.org> wrote:
>>> On Wed, Jun 29, 2011 at 11:29 PM, Kurt Touet <ktouet at gmail.com> wrote:
>>>> I have an admittedly odd zfs v28 array configuration under stable/8 r223484:
>>>>
>>>> # zpool status storage
>>>>  pool: storage
>>>>  state: ONLINE
>>>>  scan: resilvered 1.21T in 10h50m with 0 errors on Wed Jun 29 23:21:46 2011
>>>> config:
>>>>
>>>>        NAME        STATE     READ WRITE CKSUM
>>>>        storage     ONLINE       0     0     0
>>>>          raidz1-0  ONLINE       0     0     0
>>>>            ad14    ONLINE       0     0     0
>>>>            ad6     ONLINE       0     0     0
>>>>            ad12    ONLINE       0     0     0
>>>>            ad4     ONLINE       0     0     0
>>>>          mirror-1  ONLINE       0     0     0
>>>>            ad20    ONLINE       0     0     0
>>>>            ad18    ONLINE       0     0     0
>>>>
>>>> This layout was simply due to the need to expand the size of the original
>>>> raidz1-only array within the constraints of the box.  All drives in the
>>>> box _were_ 1.5TB.  I had a drive in the mirror die this week, and I
>>>> had 2 spare 2TB drives on hand.  So, I decided to replace both of the
>>>> 1.5TB drives in the mirror with 2TB drives (and free up a little more
>>>> space on the box).  However, after replacing both drives, the pool
>>>> did not expand in size.  It still acts as if the mirror contains 1.5TB
>>>> drives:
>>>>
>>>> storage     6.28T   548G
>>>>  raidz1    5.07T   399G
>>>>  mirror    1.21T   150G
>>>>
>>>> Is this normal behaviour?  It was my understanding that zfs
>>>> automatically adapted to having additional drive space in vdevs.
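
(Aside for anyone debugging the same symptom: diskinfo isn't from this
thread, but it's a quick sanity check that the kernel really sees the new
disks at 2TB before blaming the pool.)

# diskinfo -v /dev/ad20             # mediasize should come out around 2TB
# diskinfo -v /dev/ad18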
>>>
>>> You still have to set the 'autoexpand' property on the pool in order for
>>> the expansion to happen. Previous versions would expand the pool on
>>> re-import or on boot.
>>>
>>> --Artem
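
(Rough sketch of the same upgrade done with autoexpand set up front; ad22
is a made-up name for the incoming 2TB disk, and per the above the property
only helps for replacements made after it is turned on.)

# zpool set autoexpand=on storage   # enable before swapping disks
# zpool replace storage ad18 ad22   # resilver onto the bigger disk
# zpool status storage              # wait for the resilver to finish

Once both mirror members have been replaced with larger disks, the extra
space should show up on its own, with no "zpool online -e" needed.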
>>>
>>>>
>>>> -kurt
>>>
>>
>

