Test on 10GBE Intel based network card

Stefan Lambrev stefan.lambrev at moneybookers.com
Mon Aug 3 10:15:13 UTC 2009


Hi,

On Aug 3, 2009, at 12:53 PM, Invernizzi Fabrizio wrote:

> Hi
>
>
>> -----Original Message-----
>> From: Stefan Lambrev [mailto:stefan.lambrev at moneybookers.com]
>> Sent: lunedì 3 agosto 2009 11.22
>> To: Invernizzi Fabrizio
>> Cc: freebsd-performance at freebsd.org
>> Subject: Re: Test on 10GBE Intel based network card
>>
>> Hi,
>>
>> The limitation that you see is about the max number of packets that
>> FreeBSD can handle - it looks like your best performance is reached  
>> at
>> 64 byte packets?
>
> If you mean in terms of packets per second, you are right.
> These are the packets per second measured during the tests:
>
> 64 byte:        610119 Pps
> 512 byte:       516917 Pps
> 1492 byte:      464962 Pps
>
>
>> Am I correct that the maximum you can reach is around 639,000 packets
>> per second?
>
> Yes, as you can see the maximum is 610119 Pps.
> Where does this limit come from?

I don't know - the tests I did before were with SYN packets (random
source), which was my worst-case scenario, and the server CPU was busy
generating MD5 checksums for the syncache (around 35% of the time).
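For scale, here is a back-of-envelope conversion of the packet rates
quoted above into on-wire throughput. This is only a sketch: it assumes
each quoted size is a full Ethernet frame, so the only extra per-frame
cost on the wire is 20 bytes (8-byte preamble plus 12-byte inter-frame
gap).

```shell
# Convert measured packets/s into on-wire Gbit/s.
# bits per frame on the wire = (frame size + 20 bytes) * 8
for pair in "64 610119" "512 516917" "1492 464962"; do
    echo "$pair" | awk '{printf "%4d byte: %5.2f Gbit/s\n", $1, ($1 + 20) * 8 * $2 / 1e9}'
done
```

At 1492 bytes that comes out around 5.6 Gbit/s, but at 64 bytes only
about 0.41 Gbit/s, which is consistent with the bottleneck being
packets per second rather than raw bandwidth.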

If I compare my results with yours, you beat me by a factor of 2.5 -
maybe because you use ICMP for the test and your processor is better
than my test stations' :)
Also, my experience is only with gigabit cards (em driver) and FreeBSD
7.something_before_1, where the em thread was eating 100% CPU.
If you are lucky, LOCK_PROFILING(9) will help you see where the CPUs
spend their time; if not, you will see a kernel panic :)
Once the problematic locks are identified they can be reworked, but I
think the first part is already done and work on the second has
already started.
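For reference, a minimal sketch of such a LOCK_PROFILING(9) run; it
assumes a kernel rebuilt with "options LOCK_PROFILING", and the sysctl
names are the ones documented in the manual page:

```shell
# Requires a kernel built with:  options LOCK_PROFILING
sysctl debug.lock.prof.reset=1     # clear any old counters
sysctl debug.lock.prof.enable=1    # start collecting lock statistics
# ... generate the packet load here ...
sysctl debug.lock.prof.enable=0    # stop collecting
sysctl debug.lock.prof.stats       # dump per-lock contention statistics
```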

In my experience, increasing hw.em.rxd and hw.em.txd yielded better
results, but I think ixgb already comes tuned by default, as it does
not yet have to support such a large number of different cards.
Also, at the time of my tests there was no support for multiple queues
in the OS, even though the hardware supported them; that changed in
7.2 (?)
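For what it's worth, the hw.em ring sizes mentioned above are loader
tunables, so they go in /boot/loader.conf and take effect after a
reboot. A sketch; the value 4096 is illustrative, so check your
driver's documented maximum descriptor count first:

```shell
# /boot/loader.conf
hw.em.rxd="4096"    # receive descriptors per ring
hw.em.txd="4096"    # transmit descriptors per ring
```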


>
>> Also you are not routing the traffic; instead the server handles
>> the requests itself and eats CPU to reply?
>
> Correct. In these first tests I want to "tune" the system, so I am
> using the (let me say) worst-case scenario.
>
>

--
Best Wishes,
Stefan Lambrev
ICQ# 24134177






