ZFS under FreeBSD failure modes

Paul Kraus paul at kraus-haus.org
Wed Aug 6 01:19:53 UTC 2014


On Aug 2, 2014, at 17:38, kpneal at pobox.com wrote:

> I'd be careful running ZFS on a machine that lacks ECC memory. Lots of
> people do it, but I'd be worried that ZFS would get itself into a state
> where you couldn't access anything.

I am starting to see comments like this more and more frequently. What is the specific failure mechanism you expect to run into here?

> UFS I believe handles some kinds of
> damage better than ZFS.

Can you please be specific? I recall asking this question in another thread and just received snide comments back. What *specific* failure modes (and I am looking for technical details here) does UFS handle better than ZFS, and why? What is it about ZFS that does not handle that failure?

> When was the last time anyone heard of a UFS file
> system being so damaged that it couldn't be recovered?

Anecdotal evidence at best. I have plenty of anecdotal evidence that ZFS never loses data. I don’t claim it as fact.

In the early years of ZFS (and in the early years of ZFS under FreeBSD) it was much more picky about how you did things. One example: an absolute rule of mine was to never, ever relocate drives from an IMPORTED zpool. I had seen too many reports of zpools being corrupted or otherwise rendered unmountable when drives were moved around. Both the ZFS code and the underlying device driver code are much better today, so it is much less of an issue, but I still try to avoid it: I EXPORT the pool before I make any hardware changes, roughly as sketched below.
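
For the record, the sequence I follow is roughly the one below. This is just a sketch; the pool name "tank" is a placeholder, and the exact device shuffling in step 2 depends on your hardware.

    # 1. Cleanly export the pool so all devices are released and the on-disk
    #    labels record a clean export.
    zpool export tank

    # 2. Power down, move or recable the drives, boot again.

    # 3. Ask ZFS which pools it can see on the attached devices.
    zpool import

    # 4. Import the pool by name; ZFS reassembles it from the device labels,
    #    so changed device names or ordering do not matter.
    zpool import tank

    # 5. Verify that every vdev came back ONLINE.
    zpool status tank

The point of the export/import dance is that ZFS identifies pool members by their labels, not by device path, so a clean export followed by an import lets it rediscover the members wherever they land.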

--
Paul Kraus
paul at kraus-haus.org


