ZFS w/failing drives - any equivalent of Solaris FMA?

Zaphod Beeblebrox zbeeble at gmail.com
Fri Sep 12 16:04:30 UTC 2008


On Fri, Sep 12, 2008 at 11:44 AM, Oliver Fromme <olli at lurza.secnetix.de> wrote:


> Did you try "atacontrol detach" to remove the disk from
> the bus?  I haven't tried that with ZFS, but gmirror
> automatically detects when a disk has gone away, and
> doesn't try to do anything with it anymore.  It certainly
> should not hang the machine.  After all, what's the
> purpose of a RAID when you have to reboot upon drive
> failure.  ;-)
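
I haven't tested that sequence against ZFS either, but for reference I'd
expect it to look roughly like this (channel "ata2" and pool "tank" are just
placeholder names for the example):

    # atacontrol list            (find the channel the failing disk sits on)
    # atacontrol detach ata2     (drop that channel, master and slave, off
                                  the bus)
    # zpool status tank          (the pool should now show the vdev as
                                  REMOVED or UNAVAIL rather than hanging
                                  on it)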


To be fair, many "home" users run RAID without any expectation of being able
to hot-swap the drives.  While RAID can provide high availability, it can
also provide simple data security.

In my home environment, I have a number of machines running.  I have a few
things on non-redundant disks --- mostly operating systems or local archives
of internet data (like a cvsup server, for instance).  Those disks can be
lost, and while it's a nuisance, it's not catastrophic.

Other things (from family photos to mp3s to other media) I keep on home RAID
arrays.  They're not hot-swappable... but I've had quite a few disks go bad
over the years.  I actually welcome ZFS for this --- the idea that checksums
are kept makes me feel a lot more secure about my data.  I have observed
some bitrot over time on some of that data.
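
Of course the checksums only catch that rot if the data actually gets read
back now and then, so a periodic scrub (from cron, say) is worth setting up;
something like the following, with "tank" as a placeholder pool name:

    # zpool scrub tank
    # zpool status -v tank    (shows scrub progress and lists any files that
                               failed their checksums)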

To your point... I suppose you have to reboot at some point after the drive
failure, but in my experience that reboot has happened under my control some
time after the failure (usually once I have the replacement drive in hand).
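
And once the replacement disk is in, ZFS makes that part fairly painless.
Assuming a pool called "tank" and the new disk appearing at the same device
node, ad6 (placeholder names again), it's roughly:

    # zpool replace tank ad6    (resilver onto the new disk in place)
    # zpool status tank         (watch the resilver progress)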

For the home user, this can be quite inexpensive, too.  I've found a case
that can take 19 drives internally (and has good cooling) for about $125.
If you used some of the 5-in-3 drive bays, that number would increase to 25.

About the only real improvement I'd like to see in this setup is the ability
to spin down idle drives.  That would make it an ideal setup for a home RAID
array.
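
My understanding is that newer versions of atacontrol grow a "spindown"
subcommand; I haven't tried it against a pool, and I wouldn't be surprised
if ZFS's own periodic writes just woke the disks right back up, but the idea
would be something like:

    # atacontrol spindown ad4 1800    (spin ad4 down after 30 minutes idle)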

