Recommendations for 10gbps NIC

Alexander V. Chernikov melifaro at FreeBSD.org
Sat Jul 27 08:02:40 UTC 2013


On 27.07.2013 02:14, Barney Cordoba wrote:
>
>
> ------------------------------------------------------------------------
> *From:* Daniel Feenberg <feenberg at nber.org>
> *To:* Alexander V. Chernikov <melifaro at FreeBSD.org>
> *Cc:* Barney Cordoba <barney_cordoba at yahoo.com>;
> "freebsd-net at freebsd.org" <freebsd-net at freebsd.org>
> *Sent:* Friday, July 26, 2013 4:59 PM
> *Subject:* Re: Recommendations for 10gbps NIC
>
>
> On Fri, 26 Jul 2013, Alexander V. Chernikov wrote:
>
>  > On 26.07.2013 19:30, Barney Cordoba wrote:
>  >>
>  >>
>  >> ------------------------------------------------------------------------
>  >> *From:* Alexander V. Chernikov <melifaro at FreeBSD.org>
>  >> *To:* Boris Kochergin <spawk at acm.poly.edu>
>  >> *Cc:* freebsd-net at freebsd.org
>  >> *Sent:* Thursday, July 25, 2013 2:10 PM
>  >> *Subject:* Re: Recommendations for 10gbps NIC
>  >>
>  >> On 25.07.2013 00:26, Boris Kochergin wrote:
>  >> > Hi.
>  >> Hello.
>  >> >
>  >> > I am looking for recommendations for a 10gbps NIC from someone who has
>  >> > successfully used it on FreeBSD. It will be used on FreeBSD
> 9.1-R/amd64
>  >> > to capture packets. Some desired features are:
>  >> >
>
> We have experience with the HP NC523SFP and the Chelsio N320E. The key
> difference among 10GbE cards for us is how they treat foreign DACs. The HP
> would PXE boot with several brands of DAC as well as generic ones, but the
> Chelsio required a Chelsio-brand DAC to PXE boot. There was firmware on the
> NIC to check the brand of the cable. Both worked fine once booted. The
> Chelsio cables were hard to find, which became a problem. Also, when used
> with diskless Unix clients, the Chelsio cards seemed to hang from time to
> time. Otherwise, packet loss was one in a million for both cards, even
> with 7-meter cables.
>
> We liked the fact that the Chelsio cards were single-port and cheaper. I
> don't really understand why nearly all 10GbE cards are dual-port. Surely
> there is a market for NICs between 1 gigabit and 20 gigabit.
>
> The NIC heatsinks are too hot to touch during use unless specially cooled.
>
> Daniel Feenberg
> NBER
>
>
> ---------------------
> The same reason they don't make single-core CPUs anymore: it costs about
> the same to make a one-port chip as a two-port chip.
>
> I find it interesting how many people talk about "the cards" when, most
> often, the differences are really in "the drivers". Luigi made the most
> useful comment: if you ever want to use netmap, you need to buy a card
> that netmap supports. Although you don't need netmap just to capture
> 10Gb/s. Forwarding, maybe.
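
For anyone who does want to go the netmap route for capture, here is a
minimal sketch of a capture loop. It assumes the pcap-like helper API from
<net/netmap_user.h> (nm_open()/nm_nextpkt(), enabled by NETMAP_WITH_LIBS)
and a driver with netmap support such as ixgbe; the interface name "ix0"
is only an example, and the helpers have changed between netmap versions,
so treat this as an outline rather than a drop-in program.

    #include <poll.h>
    #include <stdio.h>

    #define NETMAP_WITH_LIBS
    #include <net/netmap_user.h>

    int
    main(void)
    {
            /* Attach to all hardware rings of ix0 in netmap mode. */
            struct nm_desc *d = nm_open("netmap:ix0", NULL, 0, NULL);
            if (d == NULL) {
                    perror("nm_open");
                    return (1);
            }

            struct pollfd pfd = { .fd = NETMAP_FD(d), .events = POLLIN };
            struct nm_pkthdr h;
            unsigned char *buf;
            unsigned long count = 0;

            for (;;) {
                    poll(&pfd, 1, -1);
                    /* Drain every frame currently available on the rings. */
                    while ((buf = nm_nextpkt(d, &h)) != NULL) {
                            /*
                             * h.len bytes of the frame start at buf;
                             * a real tool would filter or dump them here.
                             */
                            (void)buf;
                            count++;
                    }
            }

            nm_close(d);    /* not reached in this sketch */
            return (0);
    }
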
>
> I also find it interesting that nobody seems to have a handle on the
> performance differences. Obviously they're all different. Maybe
> substantially different.
It depends on what kind of performance you are talking about.
All of these NICs are capable of line-rate RX/TX for both small and big
packets. The only notable exception I'm aware of is Intel 82598-based NICs,
which advertise PCI-E x8 gen2 with a _2.5GT/s_ link speed, giving you a
maximum of ~14Gbit/s of bandwidth for two ports instead of 20.
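
As a back-of-the-envelope check on that ~14Gbit/s figure (my own
arithmetic; the protocol-overhead factor below is an assumption, not a
datasheet number): 2.5GT/s per lane with 8b/10b encoding is 2Gbit/s of
payload per lane, eight lanes give ~16Gbit/s raw, and after TLP/DLLP
overhead roughly 14Gbit/s is left to be shared by the two ports.

    #include <stdio.h>

    int
    main(void)
    {
            double gt_per_lane = 2.5;        /* GT/s per lane               */
            double encoding = 8.0 / 10.0;    /* 8b/10b line encoding        */
            int lanes = 8;                   /* x8 link                     */
            double overhead = 0.88;          /* assumed TLP/DLLP efficiency */

            double raw = gt_per_lane * encoding * lanes;    /* ~16 Gbit/s */
            double usable = raw * overhead;                 /* ~14 Gbit/s */

            printf("raw %.1f Gbit/s, usable ~%.1f Gbit/s\n", raw, usable);
            return (0);
    }
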
>
> The X540 with RJ45 has the obvious advantage of being compatible with
> regular gigabit cards, and single-port adapters are about $325 in the US.
>
> When cheap(er) 10G RJ45 switches become available, 10G over copper will
> start to be used more and more. Very soon.
>
> BC
>


