Tuning Gigabit
Craig Reyenga
craig at craig.afraid.org
Sun Jun 29 00:21:54 PDT 2003
From: "David Gilbert" <dgilbert at velocet.ca>
> >>>>> "Craig" == Craig Reyenga <craig at craig.afraid.org> writes:
>
> >> 300 megabit is about where 32-bit, 33 MHz PCI maxes out.
>
> Craig> Could you tell me a little more about your tests? What boards,
> Craig> and what configuration?
>
> Well... first of all, a 33 MHz 32-bit PCI bus can transfer 33M * 32
> bits per second ... which is just about 1 gigabit (1056 megabit) of
> _total_ PCI bus bandwidth. Consider that you're likely testing
> disk->RAM->NIC, so the same data crosses the bus more than once, and
> you end up with roughly 1/3 of that as throughput (minus bus
> overhead), so 300 megabit is a good number.
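A quick sketch of the arithmetic above (the 1/3 end-to-end factor is the estimate from the quoted text, not a measured value):

```shell
#!/bin/sh
# Theoretical peak of a 32-bit, 33 MHz PCI bus, in Mbit/s:
# 33 million clocks/s * 32 bits per transfer.
WIDTH_BITS=32
CLOCK_MHZ=33
PEAK=$((WIDTH_BITS * CLOCK_MHZ))
echo "peak: ${PEAK} Mbit/s"
# The quoted post estimates ~1/3 of peak survives disk->RAM->NIC:
echo "estimated end-to-end: $((PEAK / 3)) Mbit/s"
```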
I should have mentioned that iperf tests only line speed with the options I
fed it. My 5400 RPM disks can't even saturate a 100 Mbit line :(
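For reference, the sort of memory-to-memory line-speed test I mean looks roughly like this with iperf (the host name is a placeholder; no disks are involved):

```shell
# On the receiving box:
iperf -s

# On the sending box: 30-second TCP throughput test against the
# hypothetical host "receiver"; traffic is generated in memory,
# so slow disks never enter the picture.
iperf -c receiver -t 30
```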
[snip]
> Now some boards I've tested (like the nvidia chipset) are strangely
> limited to 100 megabit. I can't explain this. It seems low no matter
> how you cut it.
As I mentioned in a previous email, this is horrible. Does this manifest
itself with disk controllers and other high-bandwidth devices?
>
> Our testing has been threefold:
>
> 1) Generating packets. We test the machine's ability to generate both
> large (1500, 3000 and 9000 byte) and small (64 byte) packets. The
> large-scale generation of packets is necessary for the other
> tests. So far, some packet-flood utilities from the Linux hacker
> camp are our most efficient small-packet generators. netcat on
> memory-cached objects or on /dev/zero generates our big packets.
>
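The netcat big-packet generator described in item 1 can be sketched like so (host name and port are placeholders, and the exact listen flags vary between netcat versions):

```shell
# Receiver: discard everything arriving on an arbitrary port.
nc -l 5001 > /dev/null

# Sender: stream zeros at the hypothetical host "sink" as fast as the
# NIC and bus allow; with a large MTU this yields maximum-size frames.
nc sink 5001 < /dev/zero
```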
> 2) Passing packets. Primarily, we're interested in routing. Passing
> packets, passing packets with 100k routes, and passing packets with
> hundreds of ipf accounting rules are our benchmarks. We look at both
> small- and large-packet performance. Packet-passing machines have
> at least two interfaces ... but sometimes 3 or 4 are tested.
> Polling is a major win in the small-packet passing race.
>
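The polling mentioned in item 2 is FreeBSD's device polling; a sketch of enabling it, assuming a 4.x/5.x-era kernel built with the DEVICE_POLLING (and typically HZ=1000) options:

```shell
# Switch supported NIC drivers from interrupts to polling:
sysctl kern.polling.enable=1
# Optionally tune the share of each tick reserved for userland:
sysctl kern.polling.user_frac=50
```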
> 3) Receiving packets. netcat is our friend again here. Receiving
> packets doesn't appear to be the same level of challenge as
> generating or passing them.
>
> At any rate, we're clearly not testing file delivery. We sometimes
> play with file delivery as a first test ... or for other testing
> reasons. We've found several boards that corrupt packets when they
> pass more than 100 megabit of traffic; we haven't explained that one
> yet. Our tests centre on routing packets, because that's what we do
> with our high-performance FreeBSD boxes (all our other FreeBSD boxes
> "just work" at the level of performance they have).
>
I look forward to seeing a paper on this; it would certainly help people
make hardware purchasing decisions.
[snip]
-Craig
More information about the freebsd-performance mailing list