tcp/udp performance

Thomas Herrlin junics-fbsdstable at atlantis.maniacs.se
Tue Sep 5 09:21:13 PDT 2006


Jack Vogel wrote:
> On 8/30/06, Danny Braniss <danny at cs.huji.ac.il> wrote:
>>
>> Ever since 6.1 I've seen fluctuations in the performance of
>> the em driver (Intel(R) PRO/1000 Gigabit Ethernet).
>>
>>             motherboard                 OBN (On Board NIC)
>>             ----------------            ------------------
>>         1- Intel SE7501WV2S             Intel 82546EB::2.1
>>         2- Intel SE7320VP2D2            INTEL 82541
>>         3- Sun Fire X4100 Server        Intel(R) PRO/1000
>>
>> test 1: writing to a NetApp filer via NFS/UDP
>>            FreeBSD              Linux
>>                       MegaBytes/sec
>>         1- Average: 18.48       32.61
>>         2- Average: 15.69       35.72
>>         3- Average: 16.61       29.69
>> (interestingly, doing NFS/TCP instead of NFS/UDP shows an increase in
>> speed of around 60% on FreeBSD but none on Linux)
>>
>> test 2: iperf using 1 as server:
>>                 FreeBSD(*)      Linux
>>                      Mbits/sec
>>         1-      926             905 (this machine was busy)
>>         2-      545             798
>>         3-      910             912
>>  *: did a 'sysctl net.inet.tcp.sendspace=65536'
>>
>>
>> So, it seems to me something is not that good in the UDP department, but
>> I can't find what to tweak.
>>
>> Any help?
>>
>>         danny
>
> We have discussed this some internally; the best idea I've heard is that
> UDP is not giving us the interrupt rate that TCP would, so we end up
> not cleaning up as often, and thus descriptors might not be as quickly
> available. It's just speculation at this point.
If a high interrupt rate is a problem and your NIC+driver supports it,
then try enabling polling(4) as well. This has helped me with bulk
transfers on slower boxes, but I have noticed problems with ALTQ/dummynet
and other networking code that depends on precise timing. YMMV.
There is more info in the polling(4) man page.
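
For reference, a rough sketch of what enabling it involves on 6.x
(assuming a kernel built with polling support; em0 is just an example
interface name, so substitute your own):

    # kernel config: rebuild with these options first
    options DEVICE_POLLING
    options HZ=1000        # a higher clock rate is recommended with polling

    # then toggle polling per interface at runtime
    ifconfig em0 polling
    ifconfig em0 -polling  # turn it back off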
I think recent Linux kernels/drivers have this implemented so that it is
enabled dynamically under high load. However, I only skimmed the
documentation, and I'm not a Linux expert, so I may be wrong about that.
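
On the UDP question itself, the first knobs I would look at are the
socket-buffer sysctls and the NFS transfer sizes. A minimal sketch; the
values are only starting points to experiment with, not recommendations,
and the filer path is made up:

    # inspect the current limits
    sysctl net.inet.udp.recvspace kern.ipc.maxsockbuf

    # example: allow larger socket buffers, then raise the UDP default
    sysctl kern.ipc.maxsockbuf=2097152
    sysctl net.inet.udp.recvspace=131072

    # NFS read/write sizes can also be set at mount time (-r/-w)
    mount_nfs -r 32768 -w 32768 filer:/export /mnt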
/Junics
>
> Try this: the default is only 256 descriptors; try going for the max,
> which is 4K.
>
> Cheers,
>
> Jack
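
Regarding the descriptor suggestion above: on em(4) versions that expose
them, the counts can be raised with loader tunables. A sketch, assuming
your driver honors hw.em.rxd/hw.em.txd (check em(4) for your version;
older drivers ship the limits compiled in and need a driver rebuild):

    # /boot/loader.conf -- read at boot, so reboot after editing
    hw.em.rxd=4096    # receive descriptors (default 256)
    hw.em.txd=4096    # transmit descriptors (default 256)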