FreeBSD 5.3 I/O Performance / Linux 2.6.10 and dragonfly
Mike Tancsa
mike at sentex.net
Wed Feb 2 14:19:26 PST 2005
At 04:58 PM 02/02/2005, Matthew Dillon wrote:
> Urmmm. how about a bit more information... what are the machine
> configurations?
Sorry, it was a few postings ago in the same thread. It's:
Pentium(R) 4 CPU 3.00GHz 2GB RAM
Intel Gig NICs (em)
One big RAID5 partition on an 8xxx 3ware with 4 SATA drives, default options
on the 3ware.
full dmesg at
http://lists.freebsd.org/pipermail/freebsd-performance/2005-February/001077.html
Apart from the DragonFly boot CD, I had a separate IDE disk for the OSes I
would boot from, so that the partition info on the 3ware would remain the
same.
[nfs]# diskinfo -tv twed0s1d
twed0s1d
        512             # sectorsize
        750170179584    # mediasize in bytes (699G)
        1465176132      # mediasize in sectors
        91202           # Cylinders according to firmware.
        255             # Heads according to firmware.
        63              # Sectors according to firmware.
Seek times:
        Full stroke:      250 iter in 4.393885 sec = 17.576 msec
        Half stroke:      250 iter in 4.386907 sec = 17.548 msec
        Quarter stroke:   500 iter in 6.939157 sec = 13.878 msec
        Short forward:    400 iter in 2.234404 sec =  5.586 msec
        Short backward:   400 iter in 2.124618 sec =  5.312 msec
        Seq outer:       2048 iter in 0.360554 sec =  0.176 msec
        Seq inner:       2048 iter in 0.386926 sec =  0.189 msec
Transfer rates:
        outside:       102400 kbytes in 1.443528 sec = 70937 kbytes/sec
        middle:        102400 kbytes in 1.399967 sec = 73145 kbytes/sec
        inside:        102400 kbytes in 1.428718 sec = 71673 kbytes/sec
[nfs]#
> I can figure some things out. Clearly the BSD write numbers are dropping
> at a block size of 2048 due to vfs.write_behind being set to 1.
Interesting, I didn't know about this. I really should re-read tuning(8).
What are the dangers of setting it to zero?
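For anyone following along, the knob in question is an ordinary sysctl, so
it can be flipped at runtime to compare the two behaviors. A minimal sketch
for FreeBSD (the sysctl name is the one tuning(8) documents; try it on a
scratch box first):

```shell
# Show the current write-behind heuristic setting (default is 1)
sysctl vfs.write_behind

# Disable clustered write-behind for this boot only
sysctl vfs.write_behind=0

# To make it persistent across reboots, add this line to /etc/sysctl.conf:
#   vfs.write_behind=0
```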
> Just as
> clearly, Linux is not bothering to write out ANY data, and then able to
> take advantage of the fact that the test file is being destroyed by
> iozone (so it can throw away the data rather than write it out). This
> skews the numbers to the point where the benchmark doesn't even come
> close
> to reflecting reality, though I do believe it points to an issue with
> the BSDs ... the write_behind heuristic is completely out of date now
> and needs to be reworked.
I was using iozone (http://www.iozone.org) to test with. Right now, though,
the box I am trying to put together is a Samba and NFS server for mostly
static web content.
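To blunt the cache and unlink effects Matt describes, iozone does have flags
to include fsync and close in the timing and to use a file larger than RAM.
A hedged sketch (the flags are from iozone's own help text; the file size
and mount point here are made up for this box's 2GB of RAM):

```shell
# Sequential write (-i 0) and read (-i 1) only; 4 GB test file so the
# 2 GB of RAM cannot cache the whole thing; -e folds fsync into the
# timing and -c folds close() in, so deferred writes actually get counted.
iozone -i 0 -i 1 -e -c -s 4g -r 64k -f /mnt/test/iozone.tmp
```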
In the not-too-distant future it will also be a file server for IMAP/POP3
front ends. I think postmark does a good job of simulating that.
Are there better benchmarks or methods of testing that you know of that
would give a fairer comparison? I know all benchmarks have many caveats,
but I am trying to approach this somewhat methodically. I am just about to
start another round of NFS testing with multiple machines pounding the one
server. I was going to run postmark on the 3 client machines, starting them
at the same time.
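A sketch of what each client could run for that: a postmark config biased
toward small files and heavy create/delete churn (roughly what a mail spool
sees), plus a crude start barrier so runs launched on all three clients kick
off together. The paths and numbers are illustrative, not measured; the
`set` directives are postmark's standard interactive commands.

```shell
#!/bin/sh
# Hypothetical postmark config; location would point at the NFS mount.
cat > pm.cfg <<'EOF'
set location /mnt/nfs/pmtest
set number 10000
set transactions 20000
set size 500 10000
run
quit
EOF

# Crude start barrier: sleep until the next 10-second boundary so clients
# started within the same window begin together (use a minute boundary,
# i.e. /60*60+60, for a real multi-machine run).
NOW=$(date +%s)
START=$(( NOW / 10 * 10 + 10 ))
sleep $(( START - NOW ))
# postmark pm.cfg    # uncomment where postmark is installed
```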
Ultimately I don't give a toss if one is 10% or even 20% better than the
other. For that money, a few hundred dollars in RAM and CPU would change
that. We are mostly a BSD shop, so I don't want to deploy a Linux box for
25% faster disk I/O. But if the differences are far more acute, I perhaps
need to take a bit more notice.
> The read tests are less clear. iozone runs its read tests just after
> it runs its write tests, so filesystem syncing and write flushing is
> going to have a huge effect on the read numbers. I suspect that this
> is skewing the results across the spectrum. In particular, I don't
> see anywhere near the difference in cache-read performance between
> FreeBSD-5 and DragonFly. But I guess I'll have to load up a few test
> boxes myself and do my own comparisons to figure out what is going on.
>
> -Matt