LSI Megaraid (amr) performance woes

Sven Willenberger sven at dmv.com
Thu Feb 23 12:39:30 PST 2006


I am having some issues getting any (write) performance out of an LSI
MegaRAID SCSI 320-1 raid card (using the amr driver). The system is an
i386 (P4 Xeon) with on-board Adaptec SCSI controllers and a SUPER GEM318
SAF-TE backplane with six 146GB U320 10k RPM Hitachi drives.

dmesg highlights at message end.

The main problem is that I cannot get anywhere near decent write
performance out of the card. I compared having the backplane connected
to the on-board Adaptec controller with having it connected to the LSI
controller.

I tried three test configurations (the last in two cache modes).
"Adaptec Connected" used the on-board Adaptec SCSI controller. "LSI
Connected" used the LSI card as a plain controller, with each drive
exported as its own single-drive logical raid0 unit. "LSI write-through"
and "LSI write-back" used the LSI card itself to build the array: two
single raid0 drives as their own logical units plus a "spanned" mirror
of 4 drives (raid10) as a logical unit, with write-through and
write-back referring to the cache write policy used.

In the case of the "XXX Connected" setups, I created a raid10
configuration from 4 of the drives as follows (the commands shown are
for the Adaptec case; for the LSI case I simply used amrd2, amrd3, etc.
for the drives, as sketched after the block below):

gmirror label -b load md1 da2 da3
gmirror label -b load md2 da4 da5
gmirror load
gstripe label -s 65536 md0 /dev/mirror/md1 /dev/mirror/md2
newfs /dev/stripe/md0
mkdir /bench
mount /dev/stripe/md0 /bench
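
For the "LSI Connected" case the commands are identical apart from the
device names; going by the "amrd2 amrd3 etc" note above (amrd4 and amrd5
are assumed here), it would have looked something like:

gmirror label -b load md1 amrd2 amrd3    # first mirror pair (LSI single-drive logical units)
gmirror label -b load md2 amrd4 amrd5    # second mirror pair (amrd4/amrd5 assumed)
gmirror load
gstripe label -s 65536 md0 /dev/mirror/md1 /dev/mirror/md2    # 64k stripe across the two mirrors
newfs /dev/stripe/md0
mount /dev/stripe/md0 /bench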

To test read and write performance I used dd as follows:

dd if=/dev/zero of=/raid_or_single_drive/bench64 bs=64k count=32768
which created 2GB files.
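
For the read side, the same 2GB file is read back to /dev/null (at 2GB
it is twice the machine's 1GB of RAM, so the buffer cache should not
skew the result); something like:

dd if=/raid_or_single_drive/bench64 of=/dev/null bs=64k

dd's summary line reports the transfer rate in bytes/sec, which is the
unit used in the table below.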

The summary of results (measured in bytes/sec) is as follows:

                        |     SINGLE DRIVE    |     RAID DRIVE       |
Connection Method       |  Write   |   Read   |  Write   |   Read    |
------------------------|----------|----------|----------|-----------|
Adaptec connected       | 58808057 | 78188838 | 78625494 | 127331944 |
LSI singles             | 43944507 | 81238863 | 95104511 | 111626492 |
LSI write-through       | 45716204 | 81748996 |*10299554*| 108620637 |
LSI write-back          | 31689131 | 37241934 | 50382152 |  56053085 |
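
(For a rough sense of scale: 78625494 bytes/sec is about 75 MB/s,
95104511 is about 91 MB/s, and the problem value of 10299554 works out
to only about 9.8 MB/s.)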

With the drives connected to the Adaptec controller and using geom, I
get the expected increase in write and read performance when moving
from a single drive to the raid10 setup. Likewise, when using the LSI
controller to manage the drives as single units and using geom to
create the raid, I get a marked increase in write performance (with a
smaller increase in reads).

However, when using the LSI card itself to create the raid, I end up
with a *miserable* 10MB/sec write speed in write-through mode (while
still getting acceptable read speeds). In write-back mode the writes
are merely mediocre (and without a battery-backed raid card I would
rather not run write-back anyway), and, for some reason, reads drop
markedly compared with the write-through values.

So the question arises whether this is an issue with the way the LSI
card (320-1) handles "spans" (which I would call stripes, as opposed to
mirrors), with the way the amr driver treats such spans, with the card
not playing nicely with the Supermicro motherboard, or perhaps even
with a defective card. Has anyone else had experience with this card
and motherboard combination?

As a side note, I also tried DragonFly BSD (1.4.0), which also uses
the amr driver, and saw similar results. Linux (a default Slackware
10.2 install) showed write speeds of around 45MB/s and read speeds of
around 140MB/s using the default LSI controller settings
(write-through, 64k stripe size, etc.).

Any help or ideas would be much appreciated; I would like to get
anywhere near acceptable write speeds without relying on the unsafe
write-back setting or sacrificing too much read speed.

************************
dmesg highlights:
FreeBSD 6.0-RELEASE #0: Thu Nov  3 09:36:13 UTC 2005
    root at x64.samsco.home:/usr/obj/usr/src/sys/GENERIC
ACPI APIC Table: <PTLTD          APIC  >
Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: Intel(R) Xeon(TM) CPU 2.80GHz (2799.22-MHz 686-class CPU)
  Origin = "GenuineIntel"  Id = 0xf29  Stepping = 9

Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0x4400<CNTX-ID,<b14>>
  Hyperthreading: 2 logical CPUs
real memory  = 1073217536 (1023 MB)
avail memory = 1041264640 (993 MB)

pcib5: <ACPI PCI-PCI bridge> at device 29.0 on pci4
pci5: <ACPI PCI bus> on pcib5
amr0: <LSILogic MegaRAID 1.51> mem 0xfe200000-0xfe20ffff irq 96 at
device 1.0 on pci5
amr0: <LSILogic MegaRAID SCSI 320-1> Firmware 1L37, BIOS G119, 64MB RAM
pci4: <base peripheral, interrupt controller> at device 30.0 (no driver
attached)
pcib6: <ACPI PCI-PCI bridge> at device 31.0 on pci4
pci6: <ACPI PCI bus> on pcib6
ahd0: <Adaptec AIC7902 Ultra320 SCSI adapter> port
0x4400-0x44ff,0x4000-0x40ff mem 0xfc400000-0xfc401fff irq 76 at device
2.0 on pci6
ahd0: [GIANT-LOCKED]
aic7902: Ultra320 Wide Channel A, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBs
ahd1: <Adaptec AIC7902 Ultra320 SCSI adapter> port
0x4c00-0x4cff,0x4800-0x48ff mem 0xfc402000-0xfc403fff irq 77 at device
2.1 on pci6
ahd1: [GIANT-LOCKED]
aic7902: Ultra320 Wide Channel B, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBs


Thanks,

Sven


