Good gigabit NIC for 4.11?

ShouYan Mao symao at juniper.net
Sun Dec 25 19:19:34 PST 2005


I set up a machine with the following configuration a few months ago:
(1) Xeon 1.8GHz
(2) 1GB DDR2 memory
(3) Two Intel 82543GC Gigabit cards

The machine works in bridge mode.
It can transfer at 300kpps and 1.2Gbit/s.
BTW, the machine has one PCI-X/133 bus, but the 82543GC can only run at
64-bit * 66MHz.  So the result is at the maximum for that mode: 64 bits
* 66MHz is about 4Gbit/s, and at roughly 60% efficiency that gives
2.4Gbit/s = 1.2Gbit/s * 2 (in bridge mode each packet crosses the bus
twice, once in and once out).

If the throughput is above 1.2Gbit/s, the machine begins to drop
packets.  If the throughput is less than 1.2Gbit/s, it works well.

For big packets, the bottleneck is the PCI bus, not the CPU or memory.
If you select the 82546 or another card that can run at 64-bit * 133MHz,
I think the result will be better (see the rough calculation below).
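
Here is a rough sketch of the bus arithmetic (the 60% efficiency factor
is only my measured estimate above, not a datasheet number; at 133MHz
the gigabit links themselves would become the limit first):

/*
 * Rough PCI bus throughput estimate for the bridge setup above.
 * The 0.60 efficiency factor is my measured guess, not a datasheet
 * figure, and I round 64 * 66MHz = 4.2Gbit/s down to ~4G in the text.
 */
#include <stdio.h>

int
main(void)
{
    double clocks[2] = { 66e6, 133e6 };     /* bus clock in Hz */
    int i;

    for (i = 0; i < 2; i++) {
        double peak = 64 * clocks[i];       /* peak bits/s, 64-bit bus */
        double usable = peak * 0.60;        /* ~60% observed efficiency */
        double fwd = usable / 2;            /* bridge: each packet crosses
                                             * the bus twice (in + out) */
        printf("64bit*%3.0fMHz: peak %.1fG usable %.1fG forwarded %.1fG\n",
            clocks[i] / 1e6, peak / 1e9, usable / 1e9, fwd / 1e9);
    }
    return (0);
}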

If anyone has tried that, please let me know.


Shouyan
------------------------------------------------------------
I'm not the best, but I try to do better than last time.
------------------------------------------------------------

-----Original Message-----
From: owner-freebsd-net at freebsd.org [mailto:owner-freebsd-net at freebsd.org] On Behalf Of Bruce Evans
Sent: December 26, 2005 11:04
To: Andre Oppermann
Cc: freebsd-net at freebsd.org; Matt Staroscik; Julian Elischer
Subject: Re: Good gigabit NIC for 4.11?

On Sat, 24 Dec 2005, Andre Oppermann wrote:

> Julian Elischer wrote:
>>
>> "."@babolo.ru wrote:
>>
>>>> I've been Googling up a storm but I am having trouble finding
>>>> recommendations for a good gigabit ethernet card to use with 4.11.  The
>>>> Intel part numbers I found in the em readme are a few years old now, and
>>>> I can't quite determine how happy people are with other chipsets despite
>>>> my searches.
>>>>
>>>> I'm looking for a basic PCI 1-port card with jumbo frame support if
>>>> possible--I can live without it. Either way, stability is much more
>>>> important than performance.
>>>>
>>>>
>>> em for PCI32x33MHz works well up to 250Mbit/s, not more
>>> em for PCI64x66MHz works up to about 500Mbit/s without polling
>
> Please specify the packet size (distribution) you've got these numbers
> from.

sk and bge on PCI/33MHz, under my patched version of an old FreeBSD with
a significantly modified sk driver:
- nfs with default packet size gives 15-30MB/s on a file system where
   local r/w gives 51-53MB/s.  Strangely, tcp is best for writing
   (30MB/s vs 19 for udp) and worst for reading (15MB/s vs 23).
- sk to bge packet size 5 using ttcp -u: 1.1MB/s 240kpps (2% lost).
   Either ttcp or sk must be modified to avoid problems with ENOBUFS
   (see the sketch after this list).
- sk to bge packet size 1500 using ttcp -u: 78MB/s 53.4kpps (0% lost).
- sk to bge packet size 8192 using ttcp -u: [panic].  Apparently I got
   bad bits from -current or mismerged them.
- bge to sk packet size 5 using ttcp -u: 1.0MB/s 208kpps (0% lost).
   Different problems with ENOBUFS -- unmodified ttcp spins so test
   always takes 100% CPU.
- bge to sk packet size 1500 using ttcp -u: [bge hangs]
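
For anyone repeating these tests: the ENOBUFS problem is that a UDP
blaster can queue packets faster than the driver drains the interface
queue.  A minimal sketch of the kind of modification I mean
(hypothetical code, not my actual ttcp patch; it just backs off briefly
instead of spinning at 100% CPU):

/*
 * Sketch of an ENOBUFS-aware UDP send loop (hypothetical; not the
 * actual ttcp modification).  ENOBUFS means the interface queue is
 * full, so sleep briefly and retry instead of spinning or giving up.
 */
#include <sys/types.h>
#include <sys/socket.h>

#include <errno.h>
#include <unistd.h>

ssize_t
send_with_backoff(int s, const void *buf, size_t len)
{
    ssize_t n;

    for (;;) {
        n = send(s, buf, len, 0);
        if (n >= 0 || errno != ENOBUFS)
            return (n);                 /* success or a real error */
        usleep(100);                    /* let the driver drain the queue */
    }
}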

> You have to be careful here.  Throughput and packets per second are not
> directly related.  Throughput is generally limited by good/bad hardware
> and DMA speed.  My measurements show that with decent hardware (em(4) and
> bge(4) on PCI-X/133MHz) you can easily run at full wirespeed of 1 gigabit
> per second with 1500 bytes per packet as the CPU only has to handle about
> 81,000 packets per second.  All processing like forwarding, firewalling and

PCI/33MHz apparently can't do "only" 81000 non-small packets/sec.

> routing table lookups are done once per packet no matter how large it is.
> So at wirespeed with 64-byte packets you've got to do this 1.488 million
> times per second.  This is a bit harder and entirely CPU bound.  With some
> mods and fastforward we've got em(4) to do 714,000 packets per second on
> my Opteron 852 with PCI-X/133.  Hacking em(4) to m_free() the packets just
> before they would hit the stack I see that the hardware is capable of
> receiving full wirespeed at 64 byte packets.
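
Those per-packet rates follow directly from ethernet framing overhead;
a quick check (my arithmetic, counting the 8-byte preamble/SFD and
12-byte inter-frame gap per frame):

/*
 * Wire-rate packet rates on gigabit ethernet.  Each frame costs an
 * extra 8 bytes of preamble/SFD plus a 12-byte inter-frame gap on the
 * wire.  A 64-byte frame already includes the 4-byte FCS; a 1500-byte
 * payload becomes a 1518-byte frame (14-byte header + 4-byte FCS).
 */
#include <stdio.h>

int
main(void)
{
    int frames[2] = { 64, 1518 };       /* frame sizes including FCS */
    int i;

    for (i = 0; i < 2; i++) {
        double bits = (frames[i] + 8 + 12) * 8;
        printf("%4d-byte frames: %.0f packets/sec at 1Gbit/s\n",
            frames[i], 1e9 / bits);
    }
    return (0);
}

That prints 1,488,095 pps for 64-byte frames and 81,274 pps for
1518-byte frames, matching the figures above.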

I have timestamps which show that my sk (a Yukon-mumble, whatever is
on an A7N8X-E) can't do more than the measured 240kpps.  Once the ring
buffer is filled up, it takes about 4 usec per packet (typically 1767
usec for 480 packets) to send the packets.  I guess it spends the
entire 4 usec talking to the PCI bus and perhaps takes several cycles
setting up transactions.
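
A quick back-of-envelope check of that ceiling (my arithmetic, nothing
more):

/*
 * Back-of-envelope check of the sk ceiling from those timestamps:
 * 1767 usec for 480 packets is ~3.7 usec each, i.e. a hard limit
 * near 270 kpps, consistent with the 240 kpps measured above.
 */
#include <stdio.h>

int
main(void)
{
    double us_per_pkt = 1767.0 / 480.0; /* measured ring drain time */

    printf("%.2f usec/packet -> %.0f kpps ceiling\n",
        us_per_pkt, 1e3 / us_per_pkt);
    return (0);
}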

Bruce