RAID Performance Questions

Milo Hyson milo at cyberlifelabs.com
Thu Jan 25 19:30:42 UTC 2007


I don't have much experience with RAID, so I'm wondering whether the
performance figures I'm seeing are normal or whether I just need to
tune things a bit. Based on what I've been reading, I would expect
more significant improvements over a single drive. Here's my setup:

* FreeBSD 5.4-RELEASE-p22
* AMD Athlon 2200+
* 512 MB RAM
* 3ware 9500S-8 RAID controller
* 8 x Maxtor 7Y250M0 drives (SATA150 - 250 GB each)
* 1 x UDMA100 system drive

I'm using a trimmed-down but otherwise stock kernel (see below). The
array is configured as two units: a three-drive RAID 5 and a
four-drive RAID 10. Both units have been fully initialized and
verified. No errors or warnings are being issued by the controller --
everything is green. Using bonnie I get the following results with a
1.5 GB file:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
single   1536 42229 45.1 44379 19.4 17227  7.7 40819 41.6 44772 12.1 141.1  0.7
raid5    1536 21812 22.8 21876  8.7 12935  5.9 47283 48.3 61998 17.0 152.8  0.8
raid10   1536 21905 23.0 21999  8.6 14878  6.7 49036 50.1 64847 17.7 130.6  0.7
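
(For reference, each bonnie run looked roughly like this -- the mount
point and machine label here are examples, not my actual paths:

    bonnie -d /mnt/raid5 -s 1536 -m raid5

with -d pointed at a filesystem on each unit in turn.)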

The write throughput of both RAID units is lower than the single
drive's, which is expected since each write has to touch multiple
drives. However, I wasn't expecting such a drastic reduction (about
50%). The read rates, although higher, are only marginally so for
per-char transfers. They're somewhat better for block reads, but
still not what I would expect: it seems to me that a read spread
across four drives should gain more than 45% over a single drive. The
highest rate recorded here (64847 K/sec, roughly 63 MB/s) is only a
quarter of the PCI bus speed, so I doubt that's a bottleneck. CPU
load peaks at 50%, so I don't see that being a problem either.
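
(To take UFS out of the picture, I suppose I could also try a raw
sequential read straight off each unit, something like the following
with /dev/da0 standing in for the RAID 5 device:

    dd if=/dev/da0 of=/dev/null bs=1m count=1536

All of the figures above, though, were measured through the
filesystem.)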

I also ran some performance tests with a stock build of PostgreSQL
8.0 to get a different angle on things. Two tests were run against
each of the UDMA system drive, the RAID 5 unit, and the RAID 10 unit.
The first timed sequential scans through a table of 58,000+ records;
the second timed random index scans of the same table. These were
read-only tests -- no writes were performed. The results are as
follows:

Unit      Seq scans/sec   Index scans/sec
-----------------------------------------
single            0.550          2048.983
raid5             0.533          2063.900
raid10            0.533          2093.283
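
(The queries were along these lines -- the database, table, and
column names are placeholders rather than my real schema:

    psql -d testdb -c "EXPLAIN ANALYZE SELECT count(*) FROM records"
    psql -d testdb -c "EXPLAIN ANALYZE SELECT * FROM records WHERE id = 12345"

The first forces a sequential scan over the whole table; the second
goes through the index.)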

Any performance benefit from RAID in these tests is almost
nonexistent. Am I doing something wrong? Am I expecting too much? Any
advice would be much appreciated.

Here is my kernel config (the twa driver is loaded as a module; see
the note after the listing):

machine         i386
cpu             I686_CPU
ident           NAS-20070124

options         SCHED_4BSD              # 4BSD scheduler
options         INET                    # InterNETworking
options         FFS                     # Berkeley Fast Filesystem
options         SOFTUPDATES             # Enable FFS soft updates support
options         UFS_ACL                 # Support for access control lists
options         UFS_DIRHASH             # Improve performance on big directories
options         NFSCLIENT               # Network Filesystem Client
options         NFSSERVER               # Network Filesystem Server
options         CD9660                  # ISO 9660 Filesystem
options         PROCFS                  # Process filesystem (requires PSEUDOFS)
options         PSEUDOFS                # Pseudo-filesystem framework
options         COMPAT_43               # Compatible with BSD 4.3 [KEEP THIS!]
options         COMPAT_FREEBSD4         # Compatible with FreeBSD4
options         SCSI_DELAY=15000        # Delay (in ms) before probing SCSI
options         SYSVSHM                 # SYSV-style shared memory
options         SYSVMSG                 # SYSV-style message queues
options         SYSVSEM                 # SYSV-style semaphores
options         _KPOSIX_PRIORITY_SCHEDULING # POSIX P1003_1B real-time extensions
options         ADAPTIVE_GIANT          # Giant mutex is adaptive.

device          apic                    # I/O APIC

# Bus support.  Do not remove isa, even if you have no isa slots
device          isa
device          pci

# ATA and ATAPI devices
device          ata
device          atadisk         # ATA disk drives
device          atapicd         # ATAPI CDROM drives
options         ATA_STATIC_ID   # Static device numbering

# SCSI support
device          scbus           # SCSI bus (required for SCSI)
device          da              # Direct Access (disks)

# atkbdc0 controls both the keyboard and the PS/2 mouse
device          atkbdc          # AT keyboard controller
device          atkbd           # AT keyboard

device          vga             # VGA video card driver

# syscons is the default console driver, resembling an SCO console
device          sc

# Floating point support - do not disable.
device          npx

# Serial (COM) ports
device          sio             # 8250, 16[45]50 based serial ports

# PCI Ethernet NICs that use the common MII bus controller code.
# NOTE: Be sure to keep the 'device miibus' line in order to use these NICs!
device          miibus          # MII bus support
device          xl              # 3com 10/100

# Pseudo devices.
device          loop            # Network loopback
device          mem             # Memory and kernel memory devices
device          io              # I/O device
device          random          # Entropy device
device          ether           # Ethernet support
device          pty             # Pseudo-ttys (telnet etc)
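
(The twa module itself is loaded rather than compiled in. For anyone
reproducing this setup, the usual FreeBSD mechanism is

    kldload twa                                  # load into the running kernel
    echo 'twa_load="YES"' >> /boot/loader.conf   # load at every boot

for the running system and for boot time, respectively.)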

--
Milo Hyson
CyberLife Labs



