network performance

Stefan Lambrev stefan.lambrev at
Wed Feb 6 01:05:33 PST 2008


Kris Kennaway wrote:
> Stefan Lambrev wrote:
>> Hello,
>> Kris Kennaway wrote:
>>> Stefan Lambrev wrote:
>>>>>>> Thanks for investigating this. One thing to note is that ip 
>>>>>>> flows from
>>>>>>> the same connection always go down the same interface, this is 
>>>>>>> because
>>>>>>> Ethernet is not allowed to reorder frames. The hash uses
>>>>>>> src-mac, dst-mac, src-ip and dst-ip (see lagg_hashmbuf), make 
>>>>>>> sure when
>>>>>>> performance testing that your traffic varies in these values. 
>>>>>>> Adding
>>>>>>> tcp/udp ports to the hashing may help.
>>>>>> The traffic that I generate has a random/spoofed source, so it is 
>>>>>> split between interfaces for sure :)
>>>>>> Here you can find results when under load from hwpmc and 
>>>>>> lock_profiling:
>>> OK, this shows the following major problems:
>>>     39  22375065  1500649  5690741  3      0  119007   712359  /usr/src/sys/net/route.c:147 (sleep mutex:radix node head)
>>>     21   3012732  1905704  1896914  1      1   14102   496427  /usr/src/sys/netinet/ip_output.c:594 (sleep mutex:rtentry)
>>>     22       120  2073128       47  2  44109       0        3  /usr/src/sys/modules/if_lagg/../../net/ieee8023ad_lacp.c:503 (rw:if_lagg rwlock)
>>>     39  17857439  4262576  5690740  3      0   95072  1484738  /usr/src/sys/net/route.c:197 (sleep mutex:rtentry)
>>> It looks like the if_lagg one has been fixed already in 8.0, it 
>>> could probably be backported but requires some other infrastructure 
>>> that might not be in 7.0.
>>> The others are to do with concurrent transmission of packets (it is 
>>> doing silly things with route lookups).  kmacy has a WIP that fixes 
>>> this.  If you are interested in testing an 8.0 kernel with the fixes 
>>> let me know.
>> Well, those servers are only for tests, so I can test everything, but 
>> at some point I'll have to make a final decision about what to use in 
>> production :)
> is a sys/ tarball from my p4 
> branch, which includes these and other optimizations.
Just downloaded them - will patch my system and test today.
>>>>> I forgot this file :)
>>>> I found that MD5Transform always uses ~14% (with rx/txcsum enabled 
>>>> or disabled).
>>> Yeah, these don't have anything to do with MD5.
>> Well, I didn't find where MD5Transform() is called from, so I guess 
>> it's some 'magic' that I still do not understand ;)
> MD5Transform is an internal function called by other MD5* functions. 
> Check netinet/tcp_syncache.c
Well, now I understand why I see it only on the final delivery host and 
not on the firewall :)
>>> It is probably from the syncache.  You could disable it 
>>> (net.inet.tcp.syncookies_only) if you don't need strong protection 
>>> against SYN flooding.
>>> Kris
>> How the server performs during SYN flooding is exactly what I'm 
>> testing at the moment :)
>> So I can't disable this.
> I thought this trace was on the machine you are transmitting the SYNs 
> from, perhaps I misunderstood.
The first traces, when we discussed hping, were from the machine that is 
transmitting the SYNs.
Now I'm on the next step, where I'm trying to survive the SYN flood. 
That's why lagg + lacp sounds intriguing to me:
the em driver is not really SMP-capable, but if the traffic is split 
between two or more network cards, then I'll be able to utilize two or 
more CPUs.
>> Just for information, if someone is interested - I looked at how Linux 
>> (2.6.22-14-generic Ubuntu) performs in the same situation. By default 
>> it doesn't perform at all - it hardly replies to 100-200 packets/s;
>> with syncookies enabled it can handle up to 70-90,000 pps 
>> (compared to 250-270,000 for FreeBSD), but the server is very loaded 
>> and not very responsive.
>> Of course this doesn't mean that FreeBSD can't perform better ;)
> What do you mean "compared to freebsd"?
I mean that the same hardware, when running Linux, is able to survive 
when bombed with 70-90 kpps, while when running FreeBSD it can survive 
250-270 kpps.
Of course, I'm using the default values for this Linux distro, so to 
make the comparison fair I'll try to tune Linux too.


Best Wishes,
Stefan Lambrev
ICQ# 24134177

More information about the freebsd-performance mailing list