vinum performance

Michael C. Brenner mbrenner at kaibren.com
Mon Mar 31 19:01:39 PST 2003


At 04:41 PM 3/31/2003, Jason Andresen wrote:
>>>>>Ok. But I still don't understand why RAID 5 write performance is _so_ bad.
>>>>>The CPU is not the bottleneck, it's rather bored. And I don't understand
>>>>>why RAID 0 doesn't give a big boost at all. Is the ahc driver known to be
>>>>>slow?
>>>>
>>>>(Both of these were on previously untouched files to prevent any 
>>>>caching, and the "write" test is on a new file, not rewriting an old one)
>Write speed:
>81920000 bytes transferred in 3.761307 secs (21779663 bytes/sec)
>Read speed:
>81920000 bytes transferred in 3.488978 secs (23479655 bytes/sec)
>
>But on the RAID5:
>Write speed:
>81920000 bytes transferred in 17.651300 secs (4641018 bytes/sec)
>Read speed:
>81920000 bytes transferred in 4.304083 secs (19033090 bytes/sec)

Writing to a RAID5 stripe set requires that every disk in the array report 
successful completion before the controller's buffer can be released back to 
the cache. (This applies to software and hardware RAID alike.) A large 
sequential write (like dd) can easily fill the cache on most controllers. 
Once the cache is full, the controller slows every write to the LONGEST 
completion time of any spindle in the array. Parity calculation becomes part 
of the latency as well. In a 5-drive system (unless the cache is larger than 
the largest file being written, as in a large EMC array) the writes end up 
roughly 4-5 times slower than the reads. Tuning the stripe size and blocking 
factor can speed up a specific transfer, but RAID5 has always been slow at 
writing large amounts of data and is best suited to read-mostly data.
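
To make the parity cost concrete, here is a minimal, purely illustrative 
Python sketch (not vinum code; the 4+1 layout and 8 KB block size are my 
assumptions, not Jason's setup) of how the parity block for one stripe is 
computed and why all spindles have to acknowledge before the write is done:

# Illustrative sketch only, not vinum source: RAID5 parity for one stripe
# on an assumed 5-drive array (4 data blocks + 1 parity block).
def xor_parity(data_blocks):
    """Byte-wise XOR of equal-sized data blocks gives the parity block."""
    parity = bytearray(len(data_blocks[0]))
    for block in data_blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

# One full stripe: four 8 KB data blocks (block size is a guess).
blocks = [bytes([d]) * 8192 for d in range(4)]
parity = xor_parity(blocks)
# The stripe write completes only when blocks[0..3] AND parity have all been
# acknowledged, so its latency is set by the slowest of the five drives.

The XOR itself is cheap (which is why the CPU looks bored); the cost is the 
serialization point it puts in front of five dependent disk writes.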

Read operations, on the other hand, benefit from RAID5 or mirrors. There the 
gating event is the SHORTEST completion time of the minimal set of drives 
needed: the first drives to deliver the data block end the operation. That is 
how a 2-to-1 read/write gap stretches into a 4-to-1 gap.
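
Working it through with the numbers quoted above: 21779663 / 4641018 is about 
4.7, so the RAID5 writes are almost 5 times slower than the writes in the 
first set of numbers, while 19033090 / 4641018 is about 4.1, i.e. roughly the 
4-to-1 read/write gap described above.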

MB


