100,000 TCP connections - kernel tuning advice wanted

Subhro subhro.kar at gmail.com
Fri Sep 3 09:14:08 PDT 2004


Please post the output of netstat -m.

Regards
S.


On Fri, 3 Sep 2004 22:07:35 +1000, Simon Lai <simon at synatech.com.au> wrote:
> 
> Hi all,
> 
> As part of a team, I am working on a TCP multiplexor using FreeBSD.  On side A
> we have 100,000 TCP connections accepting packets, which are multiplexed
> onto a single TCP connection on side B.  Packets going B->A are
> demultiplexed in the reverse way.  Info -
> 
> - FreeBSD version is 5.2-RELEASE.  The kernel has been recompiled to
>  use DEVICE_POLLING and unused devices have been removed.  The
>  HZ parameter has been varied through 1000, 2000 and 4000, but this
>  does not significantly alter our results.  We have also played with
>  the idle and trap sysctls for polling.
> - Our network card is an Intel EtherExpress Pro running at 100 Mbit/s.
> - UDP is not an option for us
> - Average payload size is 50-100 bytes.  The payload is preceded
>  by a 32-bit value, which is the size of the payload, so reading
>  is a matter of grabbing the size, allocating a buffer and then
>  doing the read (a sketch of this read path follows the list).
>  Minimal processing is done on the packet.
> - We are using our own specialized memory management. We use writev and
>  readv wherever possible.
> - Socket buffers have been increased to 1 MB on side B, but are the
>  default size on side A.
> - We are using kevent/kqueue - this task would be impossible without
>  them (a stripped-down sketch of the event loop also follows the list).
> - Our current test box has 1.5 GB of RAM and a 1 GHz Athlon CPU.  While
>  we might go for a faster CPU, we would like to stay within our current
>  RAM constraints.
> - Side A is connected to a test client, which has 20% idle time.
> - Side B is connected via a switch to another test box, which just
>  echoes the packets back for testing purposes.  It has significant idle
>  time.
> - Our current rough measurements, using top, show 30% user time and 60%
>  kernel time when this app is running.  This multiplexing app is the
>  only app running on the machine.  The machine is CPU bound - the
>  multiplexing requires no disk I/O.
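> 
> To make the framing concrete, the read path is roughly the sketch
> below.  This is a simplification - blocking reads, no handling of a
> frame split across kevent wakeups, and xread()/read_frame() are
> illustrative names rather than our real functions; the byte order of
> the size prefix is also an assumption here:
> 
> #include <sys/types.h>
> #include <arpa/inet.h>
> #include <stdint.h>
> #include <stdlib.h>
> #include <unistd.h>
> 
> /* Read exactly len bytes, looping over short reads.
>  * Returns 0 on success, -1 on error or EOF. */
> static int
> xread(int fd, void *buf, size_t len)
> {
>     char *p = buf;
> 
>     while (len > 0) {
>         ssize_t n = read(fd, p, len);
>         if (n <= 0)
>             return (-1);
>         p += n;
>         len -= (size_t)n;
>     }
>     return (0);
> }
> 
> /* Grab the 32-bit size prefix, allocate a buffer, then read the
>  * payload.  Returns the payload (caller frees it) or NULL on error. */
> static char *
> read_frame(int fd, uint32_t *sizep)
> {
>     uint32_t size;
>     char *payload;
> 
>     if (xread(fd, &size, sizeof(size)) == -1)
>         return (NULL);
>     size = ntohl(size);    /* assumes the prefix is in network order */
>     if ((payload = malloc(size)) == NULL)
>         return (NULL);
>     if (xread(fd, payload, size) == -1) {
>         free(payload);
>         return (NULL);
>     }
>     *sizep = size;
>     return (payload);
> }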
> 
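> Likewise, a stripped-down sketch of the side-A event loop.  The
> per-connection state, error handling and the actual forwarding with
> writev() are left out, and run_loop(), side_b_fd and MAXEVENTS are
> illustrative names, not our real code:
> 
> #include <sys/types.h>
> #include <sys/event.h>
> #include <sys/time.h>
> #include <sys/socket.h>
> 
> #define MAXEVENTS 1024
> 
> /* Skeleton of the side-A loop: one kqueue, EVFILT_READ per
>  * connection, and a 1 MB send buffer on the single side-B socket. */
> int
> run_loop(int side_b_fd, int *side_a_fds, int nconns)
> {
>     struct kevent ev[MAXEVENTS];
>     int kq, i, n, bufsize = 1024 * 1024;
> 
>     if ((kq = kqueue()) == -1)
>         return (-1);
> 
>     /* enlarge the send buffer on the side-B connection */
>     setsockopt(side_b_fd, SOL_SOCKET, SO_SNDBUF, &bufsize,
>         sizeof(bufsize));
> 
>     /* register read interest on every side-A connection */
>     for (i = 0; i < nconns; i++) {
>         struct kevent kev;
> 
>         EV_SET(&kev, side_a_fds[i], EVFILT_READ, EV_ADD, 0, 0, NULL);
>         if (kevent(kq, &kev, 1, NULL, 0, NULL) == -1)
>             return (-1);
>     }
> 
>     for (;;) {
>         n = kevent(kq, NULL, 0, ev, MAXEVENTS, NULL);
>         if (n == -1)
>             return (-1);
>         for (i = 0; i < n; i++) {
>             int fd = (int)ev[i].ident;
> 
>             /* read frames from fd (see read_frame() above) and
>              * forward them to side_b_fd with writev() */
>             (void)fd;
>         }
>     }
> }
> 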
> Currently we are getting 4000-6000 packets/sec unidirectional throughput,
> depending upon the mix of packet types/sizes.  This goes up to
> 5000-7000 packets/sec for 50,000 connections.
> 
> We are seeking advice on what kernel tunables we can tweak to improve
> packet throughput.  The constants are TCP, 100,000 connections and
> 50-100 byte packet sizes.
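> 
> In case it helps frame an answer, below is a minimal sketch of how we
> could dump the limits we assume are most relevant; the choice of
> sysctl names is a guess on our part, not a definitive list, so please
> correct it:
> 
> #include <sys/types.h>
> #include <sys/sysctl.h>
> #include <stdio.h>
> 
> /* Print a few limits that presumably matter for ~100,000 sockets. */
> int
> main(void)
> {
>     const char *names[] = {
>         "kern.maxfiles",           /* global open-file limit */
>         "kern.ipc.maxsockets",     /* global socket limit */
>         "kern.ipc.nmbclusters",    /* mbuf cluster pool size */
>         "net.inet.tcp.sendspace",  /* default TCP send buffer */
>         "net.inet.tcp.recvspace",  /* default TCP receive buffer */
>     };
>     int i;
> 
>     for (i = 0; i < (int)(sizeof(names) / sizeof(names[0])); i++) {
>         union { int i; long l; } v;
>         size_t len = sizeof(v);
> 
>         if (sysctlbyname(names[i], &v, &len, NULL, 0) == -1)
>             printf("%s: (unavailable)\n", names[i]);
>         else if (len == sizeof(int))
>             printf("%s: %d\n", names[i], v.i);
>         else
>             printf("%s: %ld\n", names[i], v.l);
>     }
>     return (0);
> }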
> 
> All help appreciated.
> 
> Regs
> 
> Simon
> 
> _______________________________________________
> freebsd-questions at freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-questions
> To unsubscribe, send any mail to "freebsd-questions-unsubscribe at freebsd.org"
> 



-- 
Subhro Sankha Kar
School of Information Technology
Block AQ-13/1 Sector V
ZIP 700091
India

