LSI MegaRAID SAS 9240 with mfi driver?
Jan Mikkelsen
janm at transactionware.com
Fri Mar 30 22:52:01 UTC 2012
On 31/03/2012, at 9:21 AM, Doug Ambrisko wrote:
> Jan Mikkelsen writes:
> | I don't know what changes Sean did. Are they in 9.0-release, or do I
> | need -stable after a certain point? I'm assuming I should be able to
> | take src/sys/dev/mfi/... and src/usr.sbin/mfiutil/... from -current.
>
> It's in the SVN projects/head_mfi repo. You can browse it via the web at:
> http://svnweb.freebsd.org/base/projects/head_mfi/
>
> It's not in -current yet. I'm working on that. I just did all the
> merges locally and eyeballed them over. Now I'm doing a compile test,
> then I can check it into -current.
OK, will check it out.
> | The performance is an interesting thing. The write performance I care
> | about is ZFS raidz2 with 6 x JBOD disks (or 6 x single disk raid0) on
> | this controller. The 9261 with a BBU performs well but obviously costs more.
>
> There will need to be some clarification in the future. JBOD is not the
> same as a single-disk RAID. If I remember correctly, when I did some
> testing of JBOD versus single-disk RAID, JBOD was slower. A
> single-disk RAID is faster since it can use the RAID cache. However, without
> the battery you risk losing data on a power outage etc. Without the
> battery, the performance of JBOD and single-disk RAID should be about
> the same.
>
> A real JBOD, as exposed by LSI's firmware etc., shows up as /dev/mfisyspd<n>
> entries. JBOD is a newer feature from LSI.
Ok, interesting. I was told by the distributor that the 9240 supports JBOD mode, but the 9261 doesn't. I'm interested to test it out with ZFS.
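If the controller really does expose the disks as JBOD (/dev/mfisyspd<n> entries), a raidz2 pool can be built directly on those devices. A minimal sketch, assuming the six disks show up as mfisyspd0 through mfisyspd5 and using a hypothetical pool name "tank":

```shell
# Assumes the 9240 is in JBOD mode and the six disks appear as
# /dev/mfisyspd0 .. /dev/mfisyspd5 (the device names are an assumption).
zpool create tank raidz2 \
    mfisyspd0 mfisyspd1 mfisyspd2 \
    mfisyspd3 mfisyspd4 mfisyspd5

# Verify the pool layout and health.
zpool status tank
```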
>
> | I can see the BBU being important for controller based raid5, but I'm
> | hoping that ZFS with JBOD will still perform well. I'm ignorant at this
> | point, so that's why I'm trying it out. Do you have any experience or
> | expectations with a 9240 being used in a setup like that?
>
> The battery or NVRAM doesn't matter for the RAID type being used: with the
> cache in NVRAM mode, the controller says "done" whenever it has space in the
> cache for the write, and the data eventually hits the disk. Without the
> cache working in this mode, the write can't be acknowledged until the disk
> says done, so performance suffers. With a single-disk RAID you have been
> using the cache.
With RAID-5 it is important because a single update requires two writes, and a failure in the window where one write has completed and the other has not can cause data corruption. I don't know whether the controller really handles this case.
I guess I'm hopeful that ZFS will perform the function that the NVRAM performs on the controller. I can see how the controller in isolation is clearly slower without a BBU, because it has to expose the higher layers to the disk latency.
> Now you can force using the cache without NVRAM, but you have to
> acknowledge the risk of that.
Yes, I understand the risk, and it is one I do not want to take. All the 9261s I have deployed have a BBU and go into write through mode if the battery has a problem.
I think I need to test it in the context of ZFS and see how it works without controller NVRAM.
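One way to inspect and control what the controller does with its cache is mfiutil(8). A sketch, assuming a single-disk RAID volume named mfid0 (the volume name is an assumption):

```shell
# Show the current cache policy for volume mfid0.
mfiutil cache mfid0

# Force write-through so acknowledged writes can't be lost on power
# failure -- the safe setting when no BBU is present.
mfiutil cache mfid0 write-through
```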
Regards,
Jan.
More information about the freebsd-stable mailing list