Question on gmirror and zfs fs behavior in unusual setup

Miroslav Lachman 000.fbsd at quip.cz
Mon Jan 11 12:16:15 UTC 2016


Matt Churchyard via freebsd-fs wrote on 01/11/2016 13:07:
>> I currently have several storage servers. For historical reasons they have 6x 1TB Western Digital Black SATA drives in each server. Configuration is as follows:
>
>> GPT disk config with boot sector
>> /dev/ada0p1 freebsd-boot 64k
>> /dev/ada0p2 freebsd-swap 1G
>> /dev/ada0p3 freebsd-ufs 30G
>> /dev/ada0p4 freebsd-zfs rest of drive
>
>> The drive names are ada0 through ada5.
>
>> The six drives all have the same partition scheme.
>> - They are all bootable
>> - Each swap has a label from swap0 through swap5 which all mount on boot
>> - The UFS partitions are all in mirror/rootfs mirrored using gmirror in a 6 way mirror. (The goal of the boot and mirror redundancy is that any drive can die and I can still boot off any other drive like nothing happened.) This partition contains the entire OS.
>> - The zfs partitions are in RAIDZ-2 configuration and are redundant automatically. They contain the network accessible storage data.
>
>> My dilemma is this. I am upgrading to 5 TB Western Digital Black drives. I have replaced drive ada5 as a test. I used the -a 4k option while partitioning to make sure sector alignment is correct. There are two major
>> changes:
>
>> - ada5p3 is now 100 G
>> - ada5p4 is now much larger due to the size of the drive
>
>> My understanding is that zfs will automatically change the total volume size once all drives are upgraded to the new 5 TB drives. Please correct me if I'm wrong! The resilver went without a hitch.
>
> You may have to run "zpool online -e pool" once all the disks have been replaced, but yes, it should be fairly easy to get ZFS to pick up the new space.
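
If I am not mistaken, setting the autoexpand property before the last 
replace should have the same effect, so the pool grows on its own once the 
final resilver finishes. Roughly (pool name "tank" and the device name are 
just placeholders here):

   zpool set autoexpand=on tank
   # or afterwards, per device:
   zpool online -e tank ada5p4
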
>
> The only other issue you may see is that if you built the original pool with 512b sectors (ashift 9) you may find that "zpool status" starts complaining that you are configured for 512b sectors when your disks are 4k (I haven't checked, but considering the size I expect those 5TB disks are 4k). If that happens you either have to live with the warning or rebuild the pool.
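
Good to know. For the archives: the current value can be checked with zdb, 
something like the following (the pool name is just an example and the 
cache file path is the usual FreeBSD location, so adjust if needed):

   zdb -U /boot/zfs/zpool.cache -C tank | grep ashift

ashift=9 means the pool was created for 512 byte sectors, ashift=12 for 4k 
sectors.
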
>
>> My concern is with gmirror. Will gmirror grow to fit the new 100 G size automatically once the last drive is replaced? I got no errors using insert with the 100 G partition into the mix with the other five 30 G partitions. It synchronized fine. The volume shows as complete and all providers are healthy.
>
> A quick test suggests you'll need to run "gmirror resize provider" once all the disks are replaced to get gmirror to update the size stored in the metadata -

Good point. I didn't know about "gmirror resize". It was not in FreeBSD 
8.4, which was the last time I played with replacing disks with bigger ones.
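
If I understand it correctly, the rough sequence after the last disk is 
swapped would then be something like this (using the mirror name from the 
original mail; growing the mounted root needs a reasonably new growfs, so 
I would keep a backup at hand either way):

   # let gmirror record the new, larger provider size in its metadata
   gmirror resize rootfs
   # then grow the UFS filesystem inside the mirror into the new space
   growfs /dev/mirror/rootfs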

Thank you

Miroslav Lachman

