bsdinstall, zfs booting, gpt partition order suitable for volume expansion

Adam McDougall mcdouga9 at egr.msu.edu
Wed Dec 18 13:53:35 UTC 2013


On Mon, Dec 16, 2013 at 08:18:14PM +0000, Teske, Devin wrote:

  
  On Dec 14, 2013, at 7:44 PM, Teske, Devin wrote:
  
  > 
  > On Dec 10, 2013, at 11:00 AM, Devin Teske wrote:
  > 
  >> 
  >> On Dec 10, 2013, at 9:53 AM, Adam McDougall wrote:
  >> 
  >>> I was wondering if either the default gpt partition order could become
  >>> p1=boot, p2=swap, p3=zpool, or if the installer could be enhanced at
  >>> some point to allow the user to select the order.  It seems like it would
  >>> be easier to expand the size of the raw device (VM, iscsi, etc) and expand
  >>> the zpool if it is the last partition.  I am not in a hurry to get this
  >>> solved, but if a change to the default order is worthwhile, it seems like
  >>> before 10.0 would be a good time to set precedent.  I'm trying to think ahead
  >>> where people will be installing 10 to VMs or expandable volumes so they can
  >>> take advantage of expansion with less hassle.  I pinged Allan Jude on this
  >>> briefly, I think he said it used to be that way but it was changed to
  >>> accommodate MBR partitioning (I think, apologies for not remembering details).
  >> 
  >> Excellent idea. Let me put that into a patch. I'll let you know when I have
  >> something that tests clean.
  > 
  > GPT proved trivial.
  > MBR on the other hand... that proved challenging.
  > 
  > While trying to best that challenge... I uncovered more than a couple nasty bugs
  > while iterating over every possible combination in the installer.
  > 
  > That being said... I'm coming out of the "tunnel" since you sent this e-mail and
  > will soon have something to commit that implements this suggestion while at
  > the same time plugging a few edge cases.
  
  Alrighty-then... time to share...
  
  Here's the commit that does what you want...
  
  http://svnweb.freebsd.org/base?view=revision&revision=259476
  
  But just keep in mind that the whole ball of wax that I "tested to death"
  is actually a combination of the following (in order):
  
  http://svnweb.freebsd.org/base?view=revision&revision=259468
  http://svnweb.freebsd.org/base?view=revision&revision=259469
  http://svnweb.freebsd.org/base?view=revision&revision=259470
  http://svnweb.freebsd.org/base?view=revision&revision=259472
  http://svnweb.freebsd.org/base?view=revision&revision=259474
  http://svnweb.freebsd.org/base?view=revision&revision=259476
  http://svnweb.freebsd.org/base?view=revision&revision=259477
  http://svnweb.freebsd.org/base?view=revision&revision=259478
  http://svnweb.freebsd.org/base?view=revision&revision=259479
  http://svnweb.freebsd.org/base?view=revision&revision=259480
  http://svnweb.freebsd.org/base?view=revision&revision=259481
  
  Needless to say, I'm going to take that break now.
  -- 
  Devin
  
Thanks.  I waited around for a daily snapshot builder to make an ISO
containing these changes, and last night this one was built:

https://pub.allbsd.org/FreeBSD-snapshots/amd64-amd64/11.0-HEAD-r259514-JPSNAP/

It seems to work for 1 or 2 disks, but if I pick 3 or 4 disks
in a stripe, mirror, or raidz I get:

Error: zpool
cannot open 'ada0p3.nopzpool': no such GEOM provider
must be a full path or shorthand device name

followed by:

Error: zfs
cannot open 'zroot': dataset does not exist

I did my testing in VirtualBox with four 5 GB disk images.  Whether or not the
installer continues, it does seem to partition the disks as discussed.
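For reference, once the zpool sits in the last GPT partition, the expansion this thread is after can be sketched roughly as follows. This is a command sketch only; the device name (ada0), partition index (3), and pool name (zroot) are examples matching the layout above, and the commands must be run against your actual providers:

```sh
# After enlarging the underlying volume (VM disk, iSCSI LUN, etc.):
gpart recover ada0             # relocate the backup GPT to the new end of the disk
gpart resize -i 3 ada0         # grow partition index 3 (the zpool partition)
zpool online -e zroot ada0p3   # expand the vdev so the pool can use the new space
```

With boot at p1 and swap at p2, nothing has to move; only the last partition grows.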

When I hit F3 I see:
DEBUG: zfs_create_boot: Creating root pool...
DEBUG: zfs_create_boot: zpool create -o altroot=/mnt -m none -f "zroot" ada0p3.nop ada1p3.nopzpool create ada2p3.nop ""
DEBUG: zfs_create_boot: retval=1 <output below>
cannot open 'ada1p3.nopzpool': no such GEOM provider
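Judging from that debug line, the failure with three or more disks looks like two generated zpool command strings being concatenated without a separator, so the last provider name fuses with the next word. A minimal sketch of that failure mode, using hypothetical strings (not the actual bsdinstall code):

```shell
#!/bin/sh
# Hypothetical reconstruction of the string-building bug: two command
# strings joined with no separator fuse 'ada1p3.nop' and 'zpool' together.
cmd1='zpool create -o altroot=/mnt -m none -f zroot ada0p3.nop ada1p3.nop'
cmd2='zpool create tank ada2p3.nop'

buggy="$cmd1$cmd2"     # no separator: yields '...ada1p3.nopzpool create...'
fixed="$cmd1; $cmd2"   # explicit ';' keeps the two commands distinct

echo "$buggy"
echo "$fixed"
```

Evaluating the buggy form hands zpool the nonsense vdev name 'ada1p3.nopzpool', which matches the "no such GEOM provider" error above.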


More information about the freebsd-stable mailing list