Question on gmirror and zfs fs behavior in unusual setup

Miroslav Lachman 000.fbsd at quip.cz
Mon Jan 11 11:59:03 UTC 2016


Octavian Hornoiu wrote on 01/11/2016 12:18:
> I currently have several storage servers. For historical reasons they have
> 6x 1TB Western Digital Black SATA drives in each server. Configuration is
> as follows:
>
> GPT disk config with boot sector
> /dev/ada0p1 freebsd-boot 64k
> /dev/ada0p2 freebsd-swap 1G
> /dev/ada0p3 freebsd-ufs 30G
> /dev/ada0p4 freebsd-zfs rest of drive
>
> The drive names are ada0 through ada5.
>
> The six drives all have the same partition scheme.
> - They are all bootable
> - Each swap has a label from swap0 through swap5 which all mount on boot
> - The UFS partitions are all in mirror/rootfs, mirrored using gmirror in a 6
> way mirror. (The goal of the boot and mirror redundancy is that any drive can
> die and I can still boot off any other drive like nothing happened. This
> partition contains the entire OS.)
> - The zfs partitions are in RAIDZ-2 configuration and are redundant
> automatically. They contain the network accessible storage data.
>
> My dilemma is this. I am upgrading to 5 TB Western Digital Black drives. I
> have replaced drive ada5 as a test. I used the -a 4k command while
> partitioning to make sure sector alignment is correct. There are two major
> changes:
>
> - ada5p3 is now 100 G
> - ada5p4 is now much larger due to the size of the drive
>
> My understanding is that zfs will automatically change the total volume
> size once all drives are upgraded to the new 5 TB drives. Please correct me
> if I'm wrong! The resilver went without a hitch.
>
> My concern is with gmirror. Will gmirror grow to fit the new 100 G size
> automatically once the last drive is replaced? I got no errors using insert
> to add the 100 G partition into the mix with the other five 30 G partitions. It
> synchronized fine. The volume shows as complete and all providers are
> healthy.

No, gmirror will not expand the volume automatically, nor can you grow it 
manually. You will end up with a 30GB mirror and 70GB of unused space on 
each new drive.
If you really need to expand this mirror, you need to create a 100GB 
partition, format it with UFS, copy the data over from the old mirror, 
destroy the old mirror and then create a new, bigger one starting from 
this 100GB partition.
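
Roughly something like the following, assuming the new 100GB partition is 
ada5p3 and the old 6-way mirror is mirror/rootfs (the names are only 
placeholders for your layout; this is just a sketch):

  # take the 100GB partition out of the old mirror first
  gmirror remove rootfs ada5p3
  # start a new one-member mirror on it and put UFS on top
  gmirror label -v newrootfs ada5p3
  newfs -U /dev/mirror/newrootfs
  mount /dev/mirror/newrootfs /mnt
  # copy the OS over, e.g. with dump/restore from a live snapshot
  dump -0Laf - /dev/mirror/rootfs | (cd /mnt && restore -rf -)
  # once the remaining drives are swapped and repartitioned to 100GB,
  # insert their p3 partitions into the new mirror one by one
  gmirror insert newrootfs ada0p3

You will also have to adjust /etc/fstab so the root filesystem points at 
the new mirror name before booting from it.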

> Anyone with knowledge of gmirror and zfs replication able to confirm that
> they'll grow automatically once all 6 drives are replaced or do I have to
> sync them at existing size and do some growfs trick later?

ZFS should grow the pool on its own once the last drive is replaced, as 
long as the pool has autoexpand=on. A reboot or a zpool offline / online 
of the devices may still be needed, I don't remember it exactly.
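
Something along these lines, with "tank" standing in for your pool name 
(just a placeholder):

  zpool set autoexpand=on tank
  # if the extra space does not show up after the last resilver,
  # ask ZFS to expand each replaced device explicitly:
  zpool online -e tank ada5p4    # repeat for ada0p4 .. ada4p4
  zpool list tank                # check the SIZE and EXPANDSZ columns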

Miroslav Lachman

