ZFS...

Borja Marcos borjam at sarenet.es
Fri May 3 07:09:43 UTC 2019



> On 1 May 2019, at 04:26, Michelle Sullivan <michelle at sorbs.net> wrote:
> 
>        mfid8   ONLINE       0     0     0

Anyway, I think this (mfid) is a mistake. I know HBA makers have long insisted on putting their firmware in the middle,
which is a bad thing.

The right way to use disks is to give ZFS access to the plain CAM devices, not through a so-called JBOD on a RAID
controller, which, at least for a long time, has really been a *logical* “RAID0” volume on a single disk. That additional layer can
completely break the semantics of transactional writes and cache flushes.

With some older cards this can be tricky to achieve: the options range from patching drivers to setting a loader/sysctl tunable, or even
flashing the card to turn it into a plain HBA with no (or minimal) RAID features.
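As an illustration of the tunable route: for mfi(4) controllers, one commonly used approach (worth verifying against your FreeBSD version and controller model) is to expose the raw disks to CAM through the mfip passthrough driver, e.g.:

```
# /boot/loader.conf — sketch, assuming an mfi(4) controller on FreeBSD.
# Expose the physical disks behind the controller as plain CAM da(4) devices.
hw.mfi.allow_cam_disk_passthrough=1
mfip_load="YES"
```

After a reboot the member disks should show up as /dev/daX alongside (or instead of) the mfid devices, and the pool can be built on those.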

If your drives are not called /dev/daX or /dev/adaX, you are likely to be in trouble. Unless something has really changed recently,
you don’t want “mfid” or “mfisyspd”.
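A quick way to check is to filter the pool status for those device names. This is only a sketch (the sample lines stand in for real `zpool status` output; on a live system you would pipe the actual command instead):

```shell
# Flag pool members that sit behind mfi(4) RAID firmware.
# Sample input mimics zpool status config lines; replace the printf
# with `zpool status` on a real FreeBSD host.
printf 'mfid8 ONLINE 0 0 0\nda3 ONLINE 0 0 0\n' |
  awk '$1 ~ /^(mfid|mfisyspd)/ { print "warning: " $1 " is behind RAID firmware" }'
```

Here only mfid8 triggers the warning; da3 is a plain CAM disk and passes silently.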

I have suffered hidden data corruption due to a faulty HBA, as well as failures of old disks, and in all cases ZFS has survived brilliantly.

And actually ZFS works on somewhat unreliable hardware. The problem is not imperfect hardware, but *evil* hardware, with
firmware built on assumptions that don’t hold for ZFS.

But I agree, non-ECC memory can be a problem. In my case all of the servers had ECC.

Borja.


More information about the freebsd-stable mailing list