Booting from an arbitrary disk in ZFS RAIDZ on 8.x

Doug Poland doug at polands.org
Thu Mar 7 14:03:21 UTC 2013


On Thu, Mar 07, 2013 at 03:11:29PM +1030, Shane Ambler wrote:
> On 06/03/2013 14:54, Doug Poland wrote:
> >On Wed, Mar 06, 2013 at 01:26:07PM +1030, Shane Ambler wrote:
> >>On 06/03/2013 05:14, Doug Poland wrote:
> 
> >>>I have 6 disks in a RAIDZ configuration.  All disks were sliced the
> >>>same with gpart (da(n)p1,p2,p3) with bootcode written to index 1,
> >>>swap on index 2 and freebsd-zfs on index 3.
> >>>
> >>>Given this configuration, I should be able to boot from any of the
> >>>6 disks in the RAIDZ.  If this is a true statement, how do I make
> >>>that happen from the loader prompt?
> >>
> >>You don't boot from an individual disk; you boot from a zpool - all
> >>disks are linked together, making one zpool "disk".
> >>
> >Something has to pick a physical device from which to boot, does it
> >not?  All the HP Smart Array 6i controller knows is that I have 6
> >RAID 0 disks to present to the OS.
> 
> I meant to add that if the bootcode is installed on each disk, then
> pointing the BIOS to any individual disk as the primary boot device
> will lead to the boot process loading the zpool. Installing it on
> each disk gives the redundancy to match the RAID in the zpool. If you
> only have one disk with bootcode and it is the one that needs
> replacing, then you can't boot. Having bootcode on all 100 disks of a
> large pool would be overkill, but the consistency may be easier to
> maintain.
> 
In my case, the HP Smart Array doesn't allow me to choose an
individual boot disk, so it's up to the controller to keep trying to
boot from the next configured disk.  I'm going to craft a test to
prove this out.
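
For reference, each disk was set up roughly along these lines; the
device name and swap size below are illustrative rather than copied
from the box:

  gpart create -s gpt da0
  gpart add -s 128 -t freebsd-boot da0    # index 1: bootcode lives here
  gpart add -s 4G -t freebsd-swap da0     # index 2: swap
  gpart add -t freebsd-zfs da0            # index 3: goes into the RAIDZ
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0

and the same repeated for da1 through da5, so any of the six should
carry working bootcode.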

> >I've had issues with this RAID controller in the past where it won't
> >present the new disk to the OS.  I've had to reboot, go into the
> >RAID config and tell it it's a single RAID 0 device (stupid, I
> >know).
> 
> When you think about it, as a RAID controller it shouldn't make
> assumptions about how to use the new disk: should it add it to an
> existing RAID set, replace a missing drive, or show it as a new
> single drive? Being able to mark a socket as permanently JBOD could
> be a useful feature, though.
> 
One would think.  I've been testing this on a similarly configured
machine and the controller eventually presents a new drive to the OS.
It takes a couple of minutes, but appears to work on this test box.
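
Assuming the replacement eventually shows up at the same device node -
say da5, a hypothetical name - the idea would be to partition it and
write bootcode exactly as sketched above, then let ZFS resilver onto
the zfs partition; something like:

  zpool replace tank da5p3    # "tank" stands in for the real pool name
  zpool status tank           # watch the resilver progress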

> >The role of /boot/zfs/zpool.cache is a mystery to me.  I believe it
> >somehow tells ZFS what devices are in use.  What if a disk goes
> >offline or is removed?
> >
> 
> As I understand it, the zpool.cache contains the zpools mounted by
> the system; after a reboot, each zpool in the cache is re-imported. I
> believe a recent commit enabled the vfs.root.mountfrom zpool to be
> imported even if there was no cache available.
> 
> From what I have heard and seen, the data about the zpool a disk
> belongs to, and the role the disk plays in that zpool, is stored on
> the disk itself, duplicated at the beginning and end of the disk. In
> my early experiments, even after starting over clean - gparting the
> disks and zeroing out their start - zpool still said they belonged to
> a pool.
> 
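As an aside, to take some of the mystery out of zpool.cache for
myself, I believe zdb can dump what the cache file currently holds;
something along these lines (flags from memory, and the path is the
FreeBSD default, so it may need adjusting):

  zdb -C -U /boot/zfs/zpool.cache

and the pool that gets mounted as root is whatever loader.conf names
in vfs.root.mountfrom, e.g. vfs.root.mountfrom="zfs:tank" (pool name
illustrative).
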
If the labels really do persist on the disk like that, I wonder about
the wisdom of re-using a drive from my test configuration.  My plan
has been to prove this out on the test box and then move that same
disk into production.  One would think ZFS is smart enough to
recognize that a "different" drive has been inserted, even if it has
the same gpart structure and came from a pool with the same name.
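
If I do end up moving the test drive into production, it sounds like I
should wipe the old labels first so nothing from the test pool is left
behind.  A rough sketch of what I have in mind, where da5p3 is a
hypothetical device name:

  # newer zpool versions have a labelclear subcommand:
  zpool labelclear -f /dev/da5p3
  # otherwise, zero the label copies ZFS keeps at both ends of the
  # partition (roughly the first and last few MB):
  dd if=/dev/zero of=/dev/da5p3 bs=1m count=4
  dd if=/dev/zero of=/dev/da5p3 bs=1m count=4 \
      oseek=$(( $(diskinfo /dev/da5p3 | awk '{print $3}') / 1048576 - 4 ))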

Thanks for your help.

-- 
Regards,
Doug

