LSI MegaRAID SAS 9240 with mfi driver?
ambrisko at ambrisko.com
Fri Mar 30 22:21:09 UTC 2012
Jan Mikkelsen writes:
| On 31/03/2012, at 1:14 AM, Doug Ambrisko wrote:
| > John Baldwin writes:
| > | On Friday, March 30, 2012 12:06:40 am Jan Mikkelsen wrote:
| > | ...
| > | > Is this path likely to work out? Any suggestions on where to go from here?
| > |
| > | You should try the updated mfi(4) driver that Doug (cc'd) is going to soon
| > | merge into HEAD. It syncs up with the mfi(4) driver on LSI's website which
| > | supports several cards that the current mfi(4) driver does not. (I'm not
| > | fully sure if the 9240 is in that group or not. Doug might know however.)
| > Yes, this card is supported by the mfi(4) driver in projects/head_mfi.
| > It looks like we fixed a couple of last-minute bugs found when trying
| > to create a RAID with mfiutil. That should be fixed now. I'm going to
| > start the merge to -current today. The version in head_mfi can run on
| > older versions of FreeBSD with the changes that Sean did.
| > Note that I wouldn't recommend the 9240 since it can't take a battery
| > option. NVRAM is the key to the speed of mfi(4) cards. However, that
| > won't stop us from supporting it.
| I don't know what changes Sean did. Are they in 9.0-release, or do I
| need -stable after a certain point? I'm assuming I should be able to
| take src/sys/dev/mfi/... and src/usr.sbin/mfiutil/... from -current.
It's in the SVN projects/head_mfi repo. You can browse it via the web at:
It's not in -current yet. I'm working on that. I just did all the
merges, took a look and eyeballed them over. Now I'm doing a compile test;
then I can check it into -current.
| The performance is an interesting thing. The write performance I care
| about is ZFS raidz2 with 6 x JBOD disks (or 6 x single disk raid0) on
| this controller. The 9261 with a BBU performs well but obviously costs more.
There will need to be some clarification in the future. JBOD is not the
same as a single-disk RAID. If I remember correctly, when I did some
testing of JBOD versus single-disk RAID, JBOD was slower. A single-disk
RAID is faster since it can use the RAID cache. However, without the
battery you risk losing data on a power outage etc. Without the battery,
the performance of a JBOD and a single-disk RAID should be about the same.
A real JBOD, as exposed by LSI's firmware etc., shows up as /dev/mfisyspd<n>
entries. JBOD is a newer thing for LSI.
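To make the naming distinction concrete, here is a hedged sketch (the device
names follow the convention above and this is only meaningful on a box with
an mfi(4) controller; elsewhere it just prints the fallback message):

```shell
# Single-disk RAID volumes appear as /dev/mfid<n>; firmware-level
# JBOD pass-through disks appear as /dev/mfisyspd<n>.
ls /dev/mfid* /dev/mfisyspd* 2>/dev/null || echo "no mfi devices found"
```

Note that mfiutil's "jbod" create type, as I understand it, actually builds
one single-drive RAID0 volume per drive (showing up as /dev/mfid<n>), which
is different from the firmware-level JBOD pass-through.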
| I can see the BBU being important for controller based raid5, but I'm
| hoping that ZFS with JBOD will still perform well. I'm ignorant at this
| point, so that's why I'm trying it out. Do you have any experience or
| expectations with a 9240 being used in a setup like that?
The benefit of the battery-backed NVRAM doesn't depend on the RAID type
being used: in NVRAM (write-back) mode, the cache says "done" whenever it
has space in the cache for the write. Eventually, it will hit the disk.
Without the cache working in this mode, the write can't be acknowledged
until the disk says done, so performance suffers. With a single-disk RAID
you have been using the cache.
Now you can force using the cache without NVRAM, but you have to acknowledge
the risk of that.
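The acknowledgment difference above can be sketched as a toy model (this is
not the mfi(4) driver; the latency numbers are illustrative assumptions, not
measurements):

```python
# Toy model: contrast write-back (NVRAM cache) with write-through
# acknowledgment latency for a single write.

DISK_WRITE_MS = 8.0   # assumed rotational-disk write latency
CACHE_ACK_MS = 0.1    # assumed controller-cache acknowledgment latency

def ack_latency_ms(write_back: bool, cache_has_space: bool) -> float:
    """Return how long the host waits before the write is acknowledged."""
    if write_back and cache_has_space:
        # Controller says "done" as soon as the write lands in NVRAM;
        # it reaches the disk later.
        return CACHE_ACK_MS
    # Write-through mode (or a full cache): wait for the disk itself.
    return DISK_WRITE_MS

print(ack_latency_ms(True, True))    # 0.1
print(ack_latency_ms(False, True))   # 8.0
print(ack_latency_ms(True, False))   # 8.0
```

The last case is why losing the battery (and thus write-back mode) hurts:
every write degrades to disk-speed acknowledgment.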