mirror vdevs with different sizes

From: John Doherty <bsdlists_at_jld3.net>
Date: Fri, 25 Mar 2022 15:16:13 UTC
Hello, I have an existing zpool with 12 mirrors of 8 TB disks. It is 
currently about 60% full and we expect to fill the remaining space 
fairly quickly.

I would like to expand it, preferably by adding 12 mirrors of 16 TB disks 
to the same pool. Is there any reason I shouldn't do this?

Using plain files created with truncate(1) like these:

[root@ibex] # ls -lh /vd/vd*
-rw-r--r--  1 root  wheel   8.0G Mar 25 08:49 /vd/vd0
-rw-r--r--  1 root  wheel   8.0G Mar 25 08:49 /vd/vd1
-rw-r--r--  1 root  wheel    16G Mar 25 08:49 /vd/vd2
-rw-r--r--  1 root  wheel    16G Mar 25 08:49 /vd/vd3
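
For anyone reproducing this, those files were created with truncate(1); 
a minimal sketch, assuming the /vd directory already exists:

[root@ibex] # truncate -s 8G /vd/vd0 /vd/vd1
[root@ibex] # truncate -s 16G /vd/vd2 /vd/vd3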

I can first do this:

[root@ibex] # zpool create ztest mirror /vd/vd{0,1}
[root@ibex] # zpool list ztest
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
ztest  7.50G   384K  7.50G        -         -     0%     0%  1.00x  ONLINE  -
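
For completeness, zpool status at this point shows the single mirror 
vdev; roughly this (retyped, so the exact formatting is approximate):

[root@ibex] # zpool status ztest
  pool: ztest
 state: ONLINE
config:

        NAME         STATE     READ WRITE CKSUM
        ztest        ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            /vd/vd0  ONLINE       0     0     0
            /vd/vd1  ONLINE       0     0     0

errors: No known data errors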

And then do this:

[root@ibex] # zpool add ztest mirror /vd/vd{2,3}
[root@ibex] # zpool list ztest
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
ztest    23G   528K  23.0G        -         -     0%     0%  1.00x  ONLINE  -
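
If you want to see how the space splits across the two differently 
sized vdevs, zpool list -v breaks it out per vdev; roughly this, with 
the trailing columns elided and the per-vdev numbers approximate:

[root@ibex] # zpool list -v ztest
NAME          SIZE  ALLOC   FREE  ...
ztest          23G   528K  23.0G  ...
  mirror-0   7.50G   264K  7.50G  ...
    /vd/vd0      -      -      -  ...
    /vd/vd1      -      -      -  ...
  mirror-1   15.5G   264K  15.5G  ...
    /vd/vd2      -      -      -  ...
    /vd/vd3      -      -      -  ...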

And FWIW, everything works as expected. But I've never built a real 
zpool from vdevs of different sizes, and I don't know whether any 
problems should be expected.

I could just create a new zpool on new disks, but most of the existing 
data and most of the expected new data live in just two file systems. 
From those users' perspective, it would be simpler to make the existing 
file systems larger than to give them access to a new, separate one.
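
Assuming no quotas are set on those file systems, they would pick up 
the added space as soon as the new vdevs go in; something like this 
(pool and dataset names hypothetical), run before and after the zpool 
add, should show AVAIL grow:

[root@ibex] # zfs list -o name,used,avail tank/fs1 tank/fs2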

Any comments, suggestions, warnings, etc. much appreciated. Thanks.