SATA Raid (stress test..)

Nikolas Britton nikolas.britton at gmail.com
Fri Mar 3 09:51:18 PST 2006


On 3/3/06, Alex Zbyslaw <xfb52 at dial.pipex.com> wrote:
> Nikolas Britton wrote:
>
> >>Please can you be careful when you attribute your comments.  You've sent
> >>this email "to" me, and left only my name in the attributions as if I
> >>were someone suggesting either dd or diskinfo as accurate benchmarks,
> >>when in fact my contribution was to suggest unixbench and sandra-lite.
> >>Maybe you hate those too, in which case you can quote what I said
> >>in-context and rubbish that at your pleasure.
> >>
> >>
> >
> >Yes I see your point, it does look like I'm replying to something you
> >wrote. This was an oversight and I am sorry.
> >
> >
> OK.
>
> >Remember that 105MB/s number I quoted above? That's just the
> >sustained read transfer rate for a big ass file, and I don't need to
> >work with big ass files. I need to work with 15MB files (+/- 5MB).
> >After buying the right disks, controller, mainboard etc. and lots of
> >tuning with the help of iozone I get: 200 - 350MB/s overall (read,
> >write, etc.) for files less than or equal to 64MB*.
> >
> >So anyways, that's what iozone can do for you. google it and you'll
> >find out more stuff about it.
> >
> >
> Thanks for the info.  I think I can only dream about numbers like
> yours.  Iozone looks to be in the ports so I see some of my weekend
> disappearing looking at it :-)
>

It runs on over two dozen operating systems, including Windows. There
are two primary reasons I can get such high transfer rates from simple
SATA drives. The first was the selection of a mainboard with PCI-X
slots; I built this system before PCI-Express mainboards and
controllers hit the market. The standard PCI bus is severely
restricted and obsolete. Here is the theoretical maximum throughput in
MB/s for the various bus standards:

f(x,y) = x-bits * y-MHz / 8 = maximum theoretical throughput in MB/s

PCI:   (32 bits,  33 MHz) =  132 MB/s (standard PCI bus found on every PC)
PCI:   (32 bits,  66 MHz) =  264 MB/s (cards are commonplace, mainboards aren't)
PCI-X: (64 bits,  33 MHz) =  264 MB/s (obsolete, won't find it on new boards)
PCI-X: (64 bits,  66 MHz) =  528 MB/s (commonplace)
PCI-X: (64 bits, 100 MHz) =  800 MB/s
PCI-X: (64 bits, 133 MHz) = 1064 MB/s (commonplace)
PCI-X: (64 bits, 266 MHz) = 2128 MB/s
PCI-X: (64 bits, 533 MHz) = 4264 MB/s (very hard to find, even on high-end equipment)
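
If you want to double-check those numbers, here is a throwaway Python
sketch of my own (just the f(x,y) formula above, nothing FreeBSD- or
iozone-specific):

    # Parallel-bus throughput: width in bits * clock in MHz / 8 = MB/s
    buses = [
        ("PCI",   32,  33),
        ("PCI",   32,  66),
        ("PCI-X", 64,  33),
        ("PCI-X", 64,  66),
        ("PCI-X", 64, 100),
        ("PCI-X", 64, 133),
        ("PCI-X", 64, 266),
        ("PCI-X", 64, 533),
    ]
    for name, bits, mhz in buses:
        print("%s: (%d, %d) = %d MB/s" % (name, bits, mhz, bits * mhz // 8))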

PCI-X comes in two versions: version 1 (66MHz - 133MHz) and version 2
(266MHz - 533MHz). PCI-X is backwards compatible with PCI and with
slower versions of PCI-X: for example, you can put a standard PCI card
in a PCI-X 533MHz slot and it will simply run at (32, 33); similarly,
a 66MHz PCI card will run at (32, 66), and so on. PCI-X is also
forwards compatible in that you can run a 133MHz PCI-X card in a
standard (32, 33) PCI slot. Because of that backwards and forwards
compatibility I feel that PCI-X is superior to PCI-Express, *BUT*
PCI-Express going forward is far superior to PCI & PCI-X because it
does not have 13 years of legacy to remain compatible with, it's
cheaper to produce, and it's already in lower-end desktop systems as a
replacement for AGP thanks to all the gamers. A few years from now PCI
will end up where ISA / EISA are. I'm veering way off topic, so I
won't go into any more details about PCI, PCI-X, and PCI-Express.
Google around for the shortcomings of PCI / PCI-X and why PCI-Express
is the future.

PCI-Express: PCIe is not compatible with PCI or PCI-X (except for PCIe
to PCI bridging); it's just, well, totally different from the PCI
spec, and I'm already way off topic, so again just google the details.
Its theoretical maximums are expressed in gigabits per second, but I
will convert them to MB/s for comparison with PCI and PCI-X.

x1: 2.5Gbps = 312.5MB/s
x2: 625MB/s
x4: 1250MB/s
x8: 2500MB/s
x12: 3750MB/s
x16: 5000MB/s
x32: 10000MB/s
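
Same idea in Python if you want to verify the lane math (again my own
sketch; strictly these are raw bit rates, and real-world PCIe 1.x
loses roughly 20% of that to 8b/10b encoding, which I'm ignoring
here):

    # First-generation PCIe: 2.5 Gbps raw per lane; divide by 8 for MB/s
    lane_gbps = 2.5
    for lanes in (1, 2, 4, 8, 12, 16, 32):
        print("x%d: %.1f MB/s" % (lanes, lanes * lane_gbps * 1000 / 8))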

Anyways, back on topic. What was the topic? Oh yes, why you won't see
200MB/s - 350MB/s if you're using a standard PCI slot. If you look
back up at the top you will see that the standard PCI bus is a crap
shoot, limited to a theoretical maximum of 132MB/s. What this means is
that your RAID controller, the disks attached to it, and the cache
buffers on those disks are all capped at that theoretical maximum of
132MB/s. Then you have to take into account that the PCI bus is shared
with other devices such as the network card, video card, USB, etc.
Your RAID controller has to fight with all these devices, and a 1Gbit
NIC can eat up 125MB/s (12.5MB/s for a 100Mbit NIC).
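
Here's a crude back-of-the-envelope sketch of how little headroom that
shared bus leaves the RAID controller (my own hypothetical worst case,
assuming the NIC actually saturates its link and ignoring arbitration
overhead):

    # Shared 32-bit / 33MHz PCI bus: what's left for the RAID controller
    # once a NIC is running flat out?  Numbers from the table above.
    pci_bus_cap = 132.0   # MB/s, theoretical maximum
    gige_nic    = 125.0   # MB/s, saturated 1Gbit NIC
    fe_nic      = 12.5    # MB/s, saturated 100Mbit NIC
    print("headroom with 1Gbit NIC busy:   %.1f MB/s" % (pci_bus_cap - gige_nic))
    print("headroom with 100Mbit NIC busy: %.1f MB/s" % (pci_bus_cap - fe_nic))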

The next reason for those high gains is that I picked drives with 16MB
cache buffers, and that I'm insane enough to run a production server
with the write-back cache policy enabled on the array controller and
the write cache enabled on the disks. This is stupidly insane unless
you've planned for the worst. The worst case scenario is that a power
failure corrupts the array into an unrepairable state and you lose
everything.



--
BSD Podcasts @ http://bsdtalk.blogspot.com/
