(LONG) ATA Benchmark: 5.x Reads Slower than Writes
dwhite at gumbysoft.com
Fri Apr 8 23:33:41 PDT 2005
On Fri, 8 Apr 2005, Danny Howard wrote:
> I don't have the time and hardware to do very scientific tests, but I
> have been able to run a series of benchmarks using bonnie++ on some
> systems I have available to me. The ATA-based gmirror performs
> extremely well, compared to a few Adaptec RAIDs that we have, EXCEPT
> that the sequential and random reads are MUCH SLOWER than the hardware
> solution, and even *slower than the preceding write operations*. This
> is counter-intuitive, especially since RAID1 implies slowed writes and
> faster reads. I tried the benchmark on my workstation (single 2.5" IDE
> in a laptop) and got comparable write-faster-than-read results.
> The raw data can be viewed at
Could you place the 'dmesg' output for each system in this directory?
The output here is marginally useful since it shows the bonnie++ command
line. However, 100MB as the test filesize is really small unless the
systems have only 64MB of RAM -- otherwise you're testing how well
FreeBSD manages memory (or how much crap the systems are running when you
run this test).
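As a rough sketch of that sizing point (my numbers, not from the original
post): pick a bonnie++ filesize of at least twice physical RAM so the
buffer cache can't absorb the whole working set. The 2x factor and the
mount point are illustrative assumptions.

```shell
# Assumption: 2x physical RAM is enough to defeat the buffer cache.
ram_mb=512                   # e.g. the 512MB gmirror box from the post
size_mb=$(( ram_mb * 2 ))    # minimum sensible bonnie++ filesize, in MB
# bonnie++'s -s flag takes the filesize in MB; /mnt/test is hypothetical.
echo "bonnie++ -d /mnt/test -s ${size_mb}"
```

With 512MB of RAM that gives a 1024MB test file; the 100MB runs above fit
entirely in cache.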
For recent I/O tests I was doing with iozone I was using 10GB filesizes.
This blows out the cache on just about everything.
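For reference, an iozone run of that shape might look like the following
(flags per iozone's manual; the record size and test-file path are
illustrative assumptions, and the command is echoed rather than run since
a real run needs a scratch volume):

```shell
# -s 10g     : 10GB test file, large enough to blow out the cache
# -r 128k    : record size (illustrative choice)
# -i 0 -i 1  : run only the write/rewrite and read/reread tests
# -f PATH    : where to put the test file (hypothetical mount point)
cmd="iozone -s 10g -r 128k -i 0 -i 1 -f /mnt/test/iozone.tmp"
echo "$cmd"
```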
> Unfortunately, my hardware RAIDs are on FreeBSD 4, and gmirror is on 5.
> My hardware RAIDs are on dual CPU systems, with 2G RAM, and my gmirror
> is on a single hyperthreaded CPU with 512M. Yes, sorry, not especially
> scientific. Maybe the changes in FreeBSD make a big difference? Maybe
> RAM makes a big difference?
Yes, lots. Both 4.x vs. 5.x and RAM :)
> Sequential Read
> laptop 2.5" ATA: avg 76/s # SLOWER than write!
> gmirror ATA RAID1: avg 251/s # SLOWER than write!
> Adaptec SCSI RAID1: avg 7862/s
> Adaptec SCSI RAID10: avg 7618/s
I'd be really careful here... that is the number of files read per second
after the create, and as pointed out before, SoftUpdates usually gets you
a big win until it has to flush the directories out; then things suffer
during the actual flush, since the disk gets hammered. Lots of free memory
for the directory cache helps. The disk cache on the RAID controller is
buying you even more. From your results you were getting 9ms latency,
which is spot-on, so I think you are simply misinterpreting your results
here.
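A quick back-of-the-envelope check of that latency figure (my arithmetic,
not from the post): at roughly 9ms of seek plus rotational delay per
random file read, a single spindle tops out near 1000/9, so a couple
hundred files/sec from a two-disk mirror that balances reads across both
disks is about what the hardware can deliver.

```shell
# ~9ms average service time per random read (seek + rotational delay)
ms_per_read=9
per_disk=$(( 1000 / ms_per_read ))   # files/sec from one spindle
echo "one disk: ~${per_disk}/s, two-disk mirror: ~$(( per_disk * 2 ))/s"
```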
File creation tests are usually better suited to filesystem-specific
benchmarking than to throughput benchmarking. I'd suggest something more
like iozone for throughput testing. If the volumes have nothing on them
that you care about, then rawio can also be instructive.
Doug White | FreeBSD: The Power to Serve
dwhite at gumbysoft.com | www.FreeBSD.org