ZFS-only booting on FreeBSD

krad kraduk at gmail.com
Sat Feb 19 17:29:44 UTC 2011


On 19 February 2011 15:35, Daniel Staal <DStaal at usa.net> wrote:
> --As of February 19, 2011 2:44:38 PM +0000, Matthew Seaman is alleged to
> have said:
>
>> Umm... a sufficiently forgetful sysadmin can break *anything*.  This
>> isn't really a fair test: forgetting to write the boot blocks onto a
>> disk could similarly render a UFS based system unbootable.   That's why
>> scripting this sort of stuff is a really good idea.   Any new sysadmin
>> should of course be referred to the copious and accurate documentation
>> detailing exactly the steps needed to replace a drive...
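>>
>> (On FreeBSD that step is a one-liner, which makes it easy to script.
>> A minimal sketch, assuming a GPT-partitioned disk ada1 with the
>> freebsd-boot partition at index 1:
>>
>>     # reinstall the protective MBR and the ZFS-aware loader
>>     gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
>>
>> For a UFS root, the same command applies with /boot/gptboot in place
>> of /boot/gptzfsboot.)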
>>
>> ZFS is definitely advantageous in this respect, because the sysadmin has
>> to do fewer steps to repair a failed drive, so there's less opportunity
>> for anything to be missed out or got wrong.
>>
>> The best solution in this respect is one where you can simply unplug the
>> dead drive and plug in the replacement.  You can do that with many
>> hardware RAID systems, but you're going to have to pay a premium price
>> for them.  Also, you lose out on the general day-to-day benefits of
>> using ZFS.
>
> --As for the rest, it is mine.
>
> True, best case is hardware RAID for this specific problem.  What I'm
> looking at here is basically reducing the surprise: A ZFS pool being used as
> the boot drive has the 'surprising' behavior that if you replace a drive
> using the instructions from the man pages or a naive Google search, you will
> have a drive that *appears* to work until some later point when you
> attempt to reboot the system.  (At which point you will need to start
> over.)  To avoid this you need to read local documentation and/or
> remember that something beyond the man pages needs to be done.
>
> With a normal UFS/etc. filesystem the standard failure recovery systems will
> point out that this is a boot drive and handle it as necessary.  It
> will either work or it won't; it will never *appear* to work and then
> fail at some future point because of a mistake made now.  Repairing a
> specific drive may take more steps, but all the steps are handled
> together.
>
> Basically, if a ZFS boot drive fails, you are likely to get the following
> scenario:
> 1) 'What do I need to do to replace a disk in the ZFS pool?'
> 2) 'Oh, that's easy.'  Replaces disk.
> 3) System fails to boot at some later point.
> 4) 'Oh, right, you need to do this *as well* on the *boot* pool...'
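>
> Concretely, step 4 is the boot code.  A full replacement on a ZFS boot
> pool looks roughly like this (a sketch only; it assumes GPT, a pool
> named zroot, and a replacement disk ada1, so adjust the names and
> sizes to your layout):
>
>     # recreate the partition layout on the new disk
>     gpart create -s gpt ada1
>     gpart add -t freebsd-boot -s 128k ada1
>     gpart add -t freebsd-zfs ada1
>
>     # the step the man page covers: resilver into the pool
>     zpool replace zroot ada1p2
>
>     # the step that gets missed: the new disk has no boot blocks
>     gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1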
>
> Where if a UFS boot drive fails on an otherwise ZFS system, you'll get:
> 1) 'What's this drive?'
> 2) 'Oh, so how do I set that up again?'
> 3) Set up replacement boot drive.
>
> The first situation hides that it's a special case, where the second one
> doesn't.
>
> To avoid the first scenario you need to make sure your sysadmins are
> following *local* (and probably out-of-band) docs, and aware of potential
> problems.  And awake.  ;)  The scenario in the second situation presents
> its problem as a unified package, and you can rely on normal levels of
> alertness to be able to handle it correctly.  (The sysadmin will realize it
> needs to be set up as a boot device because it's the boot device.  ;)  It
> may be complicated, but it's *obviously* complicated.)
>
> I'm still not clear on whether a ZFS-only system will boot with a failed
> drive in the root ZFS pool.  Once booted, of course a decent ZFS setup
> should be able to recover from the failed drive.  But the question is if the
> FreeBSD boot process will handle the redundancy or not.  At this point I'm
> actually guessing it will, which of course only exacerbates the above
> surprise problem: 'The easy ZFS disk replacement procedure *did* work in the
> past, why did it cause a problem now?'  (And conceivably it could cause
> *major* data problems at that point, as ZFS will *grow* a pool quite easily,
> but *shrinking* one is a problem.)
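>
> (To make the grow/shrink trap concrete, with a pool named zroot:
>
>     # right: swap the failed disk for its replacement; the pool
>     # keeps its shape and simply resilvers
>     zpool replace zroot ada0p2 ada1p2
>
>     # wrong: this adds ada1p2 as a new top-level vdev, growing the
>     # pool, and top-level vdevs cannot be removed again
>     zpool add zroot ada1p2
>
> On a redundant pool, zpool add will at least warn about the mismatched
> replication level, though -f silences the warning.)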
>
> Daniel T. Staal
>

On a slightly different note: make sure you align your partitions so
the ZFS partition's first sector is divisible by 8, e.g. first sector
2048.  Also, when you create the zpool, use the gnop -S 4096 trick to
make sure the pool has ashift=12.  You may not be using Advanced
Format drives yet, but when you do in the future you will be glad you
started out this way.
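
Something like this (a sketch; the disk name, label, and partition
sizes are only examples):

    # partitions aligned to 4 KiB, ZFS starting at sector 2048 (1 MiB)
    gpart create -s gpt ada0
    gpart add -t freebsd-boot -s 128k ada0
    gpart add -t freebsd-zfs -b 2048 -l disk0 ada0

    # fake a 4096-byte-sector provider so zfs chooses ashift=12
    gnop create -S 4096 /dev/gpt/disk0
    zpool create zroot /dev/gpt/disk0.nop

    # the .nop wrapper is only needed at creation time
    zpool export zroot
    gnop destroy /dev/gpt/disk0.nop
    zpool import zroot

You can check the result afterwards with "zdb | grep ashift".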

