ZFS-only booting on FreeBSD

Matthew Seaman m.seaman at infracaninophile.co.uk
Sat Feb 19 14:44:43 UTC 2011


On 19/02/2011 13:18, Daniel Staal wrote:
>> Why wouldn't it be?  The configuration in the Wiki article sets aside a
>> small freebsd-boot partition on each drive, and the instructions tell
>> you to install boot blocks as part of that partitioning process.  You
>> would have to repeat those steps when you install your replacement drive
>> before you added the new disk into your zpool.
>>
>> So long as the BIOS can read the bootcode from one or other of the
>> drives, and
>> can then access /boot/zfs/zpool.cache to learn about what zpools you
>> have, then the system should boot.
> 
> So, assuming a forgetful sysadmin (or someone new who didn't know
> about the setup in the first place) is that a yes or a no for the
> one-drive replaced case?

Umm... a sufficiently forgetful sysadmin can break *anything*.  This
isn't really a fair test: forgetting to write the boot blocks onto a
disk could similarly render a UFS-based system unbootable.   That's why
scripting this sort of stuff is a really good idea.   Any new sysadmin
should of course be referred to the copious and accurate documentation
detailing exactly the steps needed to replace a drive...
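
By way of illustration, here is a minimal sketch of the sort of script
I mean, assuming GPT partitioning, the replacement disk appearing as
ada1, and a 128k freebsd-boot partition (adjust device names and sizes
to suit your own layout):

    #!/bin/sh
    # Recreate the partition layout on the replacement disk
    gpart create -s gpt ada1
    gpart add -t freebsd-boot -s 128k ada1
    gpart add -t freebsd-zfs ada1
    # Re-install the boot blocks -- this is the step people forget
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1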

ZFS is definitely advantageous in this respect, because the sysadmin
has fewer steps to perform when repairing a failed drive, so there's
less opportunity for anything to be missed or done wrong.
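
For instance, once the replacement disk has been partitioned and had
its boot blocks written as above, resilvering it into the pool is a
single command (pool name zroot and partition ada1p2 assumed here):

    zpool replace zroot ada1p2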

The best solution in this respect is one where you can simply unplug the
dead drive and plug in the replacement.  You can do that with many
hardware RAID systems, but you're going to have to pay a premium price
for them.  Also, you lose out on the general day-to-day benefits of
using ZFS.

> It definitely is a 'no' for the all-drives replaced case, as I
> suspected: You would need to have repeated the partitioning manually. 
> (And not letting ZFS handle it.)

Oh, assuming your sysadmins consistently fail to replace the drives
correctly, then depending on your BIOS you can find yourself in deep
doo-doo as far as rebooting goes rather sooner than that.

> If a single disk failure in the zpool can render the machine
> unbootable, it's better yet to have a dedicated bootloader drive

If a single disk failure renders your system unbootable, then you're
doing it wrong.  ZFS-root systems should certainly reboot so long as
ZFS can still assemble the root pool -- that is, with one disk failed
for RAIDZ1, two for RAIDZ2, or one disk from each mirrored pair for a
pool of two-way mirrors.
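
Before rebooting a degraded system you can check that ZFS can still
assemble the pool: 'zpool status -x' reports only unhealthy pools, so
it gives a quick answer (pool name zroot assumed):

    zpool status -x
    zpool status zroot    # full detail, including which vdev is degraded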

If failing to replace broken drives correctly is going to be a
significant problem in your environment, then I guess you're going to
have to define appropriate processes.  You might say that whenever a
hard drive is replaced, it is mandatory to book some planned downtime
at the next convenient point, do a test reboot and apply any remedial
work needed.  If your system design is such that you can't take any
one machine down for maintenance, even with advance warning, then
you've got more important problems to solve before you worry about
whether or not to use ZFS.

	Cheers,

	Matthew

-- 
Dr Matthew J Seaman MA, D.Phil.                   7 Priory Courtyard
                                                  Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey     Ramsgate
JID: matthew at infracaninophile.co.uk               Kent, CT11 9PW
