LSI SAS2008 performance with mps(4) driver
telbizov at gmail.com
Fri Mar 11 04:09:31 UTC 2011
I've been testing the LSI SAS2008 chip (on an LSI 9211-8i) for the last
month or so, and I can say
that it makes a pretty good HBA, but there are indeed a few caveats you
need to be aware of.
In support of that: tonight I finished a FreeBSD 8.2-STABLE machine with 2
x 24-disk chassis (each with
a 3Gbit/s expander), i.e. 48 x 2TB SATA RE4-GP disks in 6 x 8-disk raidz2
vdevs, and I am
able to squeeze out 900MB/s write and 1200MB/s read sequentially (single
dd). The limit here
is the backplane speed.
So back to your problem:
1) What kind of backplane are you using? Please specify the exact model. Is
it a SAS expander or direct-attached?
3Gbit/s or 6Gbit/s?
2) Exactly what disk controller are you using? More importantly, what
firmware does it run?
Those two are very important. In my case it turned out that when I
connected SAS2008 chips
to pretty much any kind of SuperMicro SAS expander backplane (I tried
the 826EL26, 836E1, and 846E1) I was
getting only around 200-300MB/s read/write speeds (on both FreeBSD and
Linux). Direct-attached
backplanes (826A) worked fine.
In the end it turned out to be some sort of problem with the LSI
firmware (version 8.00 in my case), and I was given
version 9.00 (soon to be released) to try, which completely solved the
problem. Contact LSI support (very high quality) if you want to try it.
> I can't seem to get any better performance than about 250MB/s writes through
> the controller. I'm testing with a zpool of six 250GB magnetic SATA disks,
> doing a couple of concurrent sequential writes with dd:
> dd bs=128k if=/dev/zero of=/datadisk/zero1 &
> dd bs=128k if=/dev/zero of=/datadisk/zero2 &
3) What kind of zpool raid level do you have those disks organized in?
4) Running two parallel dd's on the same pool actually turns the workload
from sequential into something much closer to random access.
Please try the following and paste results here:
4.1) dd if=/dev/zero of=/datadisk/zero1 bs=1M count=50000 (only one dd, and
use a file size larger than your RAM)
4.2) Destroy the zpool (if you have no useful data on it of course) and try
dd against each and every disk individually.
So something like:
dd if=/dev/zero of=/dev/da0 bs=1M count=50000
dd if=/dev/da0 of=/dev/null bs=1M count=50000
Monitor throughput with "gstat -f da0", or I can send you a simple C
program I wrote that resembles dd but
prints stats every second.
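To make the per-disk runs in 4.2 repeatable, they can be sketched as a small /bin/sh helper. This is just a sketch: the seq_test name and the da0-da5 device list are illustrative, not from the original post, and the write pass is destructive.

```shell
#!/bin/sh
# seq_test: sequential write then read against one target, printing
# dd's summary line for each pass.
# WARNING: the write pass destroys whatever is on the target!
seq_test() {
    target=$1   # device node (or file path)
    blocks=$2   # size in 1M blocks; pick it larger than your RAM
    echo "=== ${target}: write ==="
    dd if=/dev/zero of="${target}" bs=1M count="${blocks}" 2>&1 | tail -1
    echo "=== ${target}: read ==="
    dd if="${target}" of=/dev/null bs=1M count="${blocks}" 2>&1 | tail -1
}

# Example (destructive!), one disk at a time, while watching gstat:
# for d in /dev/da0 /dev/da1 /dev/da2; do seq_test "$d" 50000; done
```

Testing one disk at a time keeps the workload truly sequential, so a single slow disk or port stands out immediately.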
On a related note, I also experienced very slow read speeds (200MB/s) with
the above-mentioned configuration, and after re-enabling
prefetch (I used to keep it disabled as per Jeremy Chadwick's advice)
everything went back to normal - so keep that in mind.
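For reference, on FreeBSD the prefetch setting mentioned above is the vfs.zfs.prefetch_disable loader tunable; the fragment below is my illustration, not taken from the original post (ZFS also disables prefetch automatically on low-memory machines).

```shell
# /boot/loader.conf
# 0 = ZFS file-level prefetch enabled, 1 = disabled
vfs.zfs.prefetch_disable=0
```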