PERC 5/E SAS RAID in Dell PowerEdge 1950/2950
bjordan at lumeta.com
Thu Aug 10 18:32:01 UTC 2006
>Does anyone have details about the new PERC 5/E SAS RAID controller
>that is (or will soon be) shipping in the 1950/2950?
I've got one that I'm setting up/testing for postgres.
>This replaces the long standing PERC4 (which was an OEM LSI / AMI
>MegaRAID U320) in the [1,2]850 series.
I've used the 2850 with FreeBSD/postgres, but didn't have the need at
the time to do much tuning, so all I know is that it worked...
>Obviously this is an OEM chipset as well. I see it listed in mfi(4).
>It appears to be backported into RELENG_6.
I'm running 6.1-RELEASE amd64. It picked up the mfi device just fine,
and even realized that it was a Perc5/i. However, that's about where the
good news ends.
Here's the hardware:
2x dual-core 3.0 GHz CPUs (Xeon 5160, 1333 MHz FSB, 4 MB shared cache per
socket)
8 GB RAM (DDR2, fully buffered, dual ranked, 667 MHz)
6x 300 GB 10k RPM SAS drives
Perc 5/i w/256 MB battery-backed cache
DRAC5 (which I do see listed in dmesg).
Here's my experience so far (please keep in mind I'm not a FreeBSD
expert, so pointers on where I went wrong are appreciated).
1. The box came configured as a RAID 10 across all 6 disks. It appeared
to do mirroring first, then striping. I ran the following:
time bash -c "(dd if=/dev/zero of=bigfile count=125000 bs=8k && sync)"
This returned ~117 MB/s, which seems a bit slow for 6 spindles. I also
ran bonnie++ with similar results. Keep in mind just 1 of these SAS
drives easily pumps out a sustained 75 MB/s read/write rate.
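For reference, the dd parameters above work out to roughly 1 GB of data, so throughput is just total bytes over elapsed wall time; a quick sketch of the arithmetic (no elapsed time was quoted, so none is assumed here):

```shell
# 125000 blocks * 8 KiB/block = total bytes written by the dd above.
BYTES=$((125000 * 8192))
echo "total bytes: ${BYTES}"   # 1024000000, i.e. ~1 GB
# Throughput in MB/s = BYTES / 1000000 / elapsed seconds from `time`.
```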
2. Thinking there might be a problem with the controller and complex
RAID (this was discussed on the Postgres performance mailing list), I
tried RAID 5 and RAID 0 configs. The results below seem more reasonable
for the hardware:
RAID5 (x4 disks)
1024000000 bytes transferred in 6.375067 secs (160625763 bytes/sec)
RAID0 (x2 disks)
1024000000 bytes transferred in 7.392225 secs (138523922 bytes/sec)
Both of the above numbers look reasonable to me, even if the RAID 5
figure is not stellar.
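Given the ~75 MB/s per-drive figure quoted above, a back-of-the-envelope ceiling for each config puts the measured numbers in context (a rough sketch; real controllers rarely hit these ceilings):

```shell
PER_DRIVE=75                           # MB/s sustained, per the single-drive figure above
RAID5_MAX=$(( (4 - 1) * PER_DRIVE ))   # RAID 5 loses one spindle to parity on writes
RAID0_MAX=$(( 2 * PER_DRIVE ))         # RAID 0 stripes across both spindles
echo "RAID5 x4 ceiling: ${RAID5_MAX} MB/s"   # 225 MB/s vs ~160 measured
echo "RAID0 x2 ceiling: ${RAID0_MAX} MB/s"   # 150 MB/s vs ~138 measured
```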
3. So my initial conclusion was that the Perc5/i handles nested RAID
levels (10, 0+1, etc.) poorly. However, a coworker suggested I test on
Knoppix 5.
Here's the results:
RAID5 (x4 disks)
~270 MB/s with dd on ext2 (very close to theoretical max)
RAID10 (4 disks)
mixed results: anywhere from 148 MB/s to an unrealistic 700+ MB/s (which
I attribute to caching in RAM, although issuing a sync should force it
to disk... odd). So I ran:
bonnie++ -d bonnie -s 6600:8k
and got ~100 MB/s for sequential input.
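One likely source of the inflated numbers: with 8 GB of RAM, a 6600 MB test file can sit largely in the buffer cache. The usual bonnie++ rule of thumb is a test file of at least twice RAM; a quick check using the sizes from the hardware list above:

```shell
RAM_MB=$((8 * 1024))      # 8 GB RAM, from the hardware list
TEST_MB=6600              # the -s 6600 bonnie++ file size used above
NEED_MB=$((2 * RAM_MB))   # rule of thumb: test file >= 2x RAM
echo "test file ${TEST_MB} MB, want >= ${NEED_MB} MB to defeat caching"
```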
I'm not sure why Knoppix is so much faster than BSD 6.1 amd64 on the 4
disk RAID 5 test, but I'm going to move forward and use all 6 disks in a
single RAID set.
4. During testing, I tried installing BSD on disk0, then setting up a
RAID0 on disks 2 & 3, and a RAID1 on disks 4 & 5 for testing basic raid
performance. Unfortunately, BSD was unable to recognize the other
volumes. In dmesg, I would see mfid0, mfid1, and mfid2, but when I
would try to mount them using sysinstall and the instructions in the
handbook, FDisk did not see the correct sizes. Furthermore, I think it
always pointed to mfid0. More specifically:
A: go to FDISK, select mfid0, partition 250GB /, remainder swap
B: select mfid1, saw partitions created in (A)
C: select mfid2, saw partitions created in (A)
Note: Steps A-C were performed on a freshly initialized RAID as
described above. I have not had the chance to try the above RAID
configuration on any other OS at this point.
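One possible workaround for the sysinstall confusion would be to slice and label the extra volumes from the command line instead, using the standard FreeBSD 6.x tools. I haven't verified this against the Perc5/i, so it's shown here as a dry run (the echoes print the commands rather than execute them):

```shell
# Dry-run sketch: partition an extra mfi volume without sysinstall.
# Substitute the volume name from dmesg; drop the echos to run for real.
VOL=mfid1
echo "fdisk -BI /dev/${VOL}"       # dedicate the disk to one FreeBSD slice
echo "bsdlabel -w ${VOL}s1 auto"   # write a default label on that slice
echo "newfs /dev/${VOL}s1a"        # create a UFS filesystem
echo "mount /dev/${VOL}s1a /mnt"
```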
Sorry for the long post, but hopefully some of the above info will be
useful. If anyone has suggestions for solutions to the multiple raid set
issues (#4 above) or hints on tuning BSD I/O performance for RAID, I'd
appreciate hearing them.