Question on gmirror and zfs fs behavior in unusual setup

Steven Hartland killing at multiplay.co.uk
Mon Jan 11 11:45:05 UTC 2016



On 11/01/2016 11:18, Octavian Hornoiu wrote:
> I currently have several storage servers. For historical reasons they have
> 6x 1TB Western Digital Black SATA drives in each server. Configuration is
> as follows:
>
> GPT disk config with boot sector
> /dev/ada0p1 freebsd-boot 64k
> /dev/ada0p2 freebsd-swap 1G
> /dev/ada0p3 freebsd-ufs 30G
> /dev/ada0p4 freebsd-zfs rest of drive
>
> The drive names are ada0 through ada5.
>
> The six drives all have the same partition scheme.
> - They are all bootable
> - Each swap has a label from swap0 through swap5 which all mount on boot
> - The UFS partitions are all mirrored into mirror/rootfs using gmirror in a
> 6-way mirror. (The goal of the boot and mirror redundancy is that any drive
> can die and I can still boot off any other drive like nothing happened.)
> This partition contains the entire OS.
> - The zfs partitions are in a RAID-Z2 configuration and are automatically
> redundant. They contain the network-accessible storage data.
>
> My dilemma is this: I am upgrading to 5 TB Western Digital Black drives and
> have replaced drive ada5 as a test. I used the -a 4k flag while partitioning
> to make sure sector alignment is correct. There are two major changes:
>
> - ada5p3 is now 100 G
> - ada5p4 is now much larger due to the size of the drive
>
> My understanding is that zfs will automatically change the total volume
> size once all drives are upgraded to the new 5 TB drives. Please correct me
> if I'm wrong! The resilver went without a hitch.
Correct, you just need to ensure that autoexpand is enabled on the pool, e.g.:
zpool set autoexpand=on tank
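
Note that if the pool doesn't grow by itself after the final replacement,
you can force the expansion per device. A quick sketch (tank is the example
pool name from above and ada5p4 one of your zfs providers; substitute your
own names):

zpool get autoexpand tank     # confirm the property reads "on"
zpool online -e tank ada5p4   # manually expand one device if needed
zpool list tank               # the SIZE column reflects the result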
> My concern is with gmirror. Will gmirror grow to fit the new 100 G size
> automatically once the last drive is replaced? I got no errors using gmirror
> insert to add the 100 G partition into the mix with the other five 30 G
> partitions. It synchronized fine. The volume shows as complete and all
> providers are healthy.
I'm not 100% sure about gmirror, but the following seems to detail what
you want:
https://lists.freebsd.org/pipermail/freebsd-questions/2007-August/156466.html
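
For what it's worth, gmirror won't grow the mirror on its own: the size
recorded in its metadata stays at the old provider size until you change it.
A rough sketch, assuming a recent FreeBSD where gmirror(8) has the resize
verb and growfs(8) can grow a mounted filesystem (rootfs is your mirror
name):

# once all six p3 partitions are 100 G and have synchronized:
gmirror resize rootfs        # grow the mirror to match its smallest provider
growfs /dev/mirror/rootfs    # then expand the UFS filesystem to fill it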

>
> Anyone with knowledge of gmirror and zfs able to confirm that they'll grow
> automatically once all 6 drives are replaced, or do I have to sync them at
> the existing size and do some growfs trick later?
>
> Thanks!


