tuning for high connection rates

Robert Watson rwatson at FreeBSD.org
Tue Dec 4 18:34:01 PST 2007


On Wed, 5 Dec 2007, Philipp Wuensche wrote:

> We are running FreeBSD 7-BETA4 with SCHED_4BSD on an Intel Core 2 Duo E6600 
> 2.4GHz system for our BitTorrent Opentracker.
>
> The system handles about 20 kpps (18 Mbit/s) of incoming and 15 kpps (22 Mbit/s) 
> of outgoing traffic, serving 4000 TCP connections/sec. The connections are 
> very short-lived; each one is answered within a single packet.
>
> You can find the system stats at 
> http://outpost.h3q.com/stalker/munin/opentracker/opentracker.html
>
> We are now running into limits at peak time: system CPU usage goes up to 100%, 
> and em0 takes about 80% of one CPU while the Opentracker software itself only 
> takes 10-15% CPU. The system is still responsive and answers all requests, but 
> we are worried about what will happen if the tracker keeps growing at the 
> current rate.
>
> We are currently out of tuning ideas, so we kindly ask for suggestions on how 
> to bring down the CPU usage of the em driver and of the system in general. We 
> tried tuning the em int_delay and abs_int_delay values, but without success.
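(For reference, a minimal sketch of how those delays can be adjusted at
runtime, assuming the per-device sysctl names shown in the em0 debug dump
quoted further below; the example values are arbitrary and only illustrate
the mechanism:)

    # read the current interrupt-delay settings
    sysctl dev.em.0.rx_int_delay dev.em.0.rx_abs_int_delay
    sysctl dev.em.0.tx_int_delay dev.em.0.tx_abs_int_delay
    # example: coalesce receive interrupts more aggressively
    sysctl dev.em.0.rx_int_delay=100
    sysctl dev.em.0.rx_abs_int_delay=200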

Could you show us the output of "top -S" left running for a few minutes in 
the steady state?

Could you try setting the sysctl net.isr.direct to 0, and see how that affects 
performance, CPU time reports, and "top -S" output?
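A minimal sketch of that experiment, assuming a root shell; with direct
dispatch disabled, protocol processing should move out of the em0 interrupt
thread and be accounted to the netisr software interrupt thread (swi1: net)
instead:

    # 1 = process inbound packets directly in the interrupt thread,
    # 0 = queue them to the netisr thread
    sysctl net.isr.direct
    sysctl net.isr.direct=0
    # then watch where the CPU time is accounted
    top -S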

Robert N M Watson
Computer Laboratory
University of Cambridge

>
> We have updated to the latest em driver:
>
> em0: <Intel(R) PRO/1000 Network Connection Version - 6.7.3> port
> 0x4000-0x401f mem 0xe8000000-0xe801ffff irq 16 at device 0.0 on pci13
> em0: Using MSI interrupt
> em0: Ethernet address: 00:30:48:92:06:5f
> em0: [FILTER]
>
> The debug output of em0 looks like this:
>
> em0: CTRL = 0x40140248 RCTL = 0x8002
> em0: Packet buffer = Tx=20k Rx=12k
> em0: Flow control watermarks high = 10240 low = 8740
> em0: tx_int_delay = 66, tx_abs_int_delay = 66
> em0: rx_int_delay = 32, rx_abs_int_delay = 66
> em0: fifo workaround = 0, fifo_reset_count = 0
> em0: hw tdh = 183, hw tdt = 183
> em0: hw rdh = 139, hw rdt = 139
> em0: Num Tx descriptors avail = 223
> em0: Tx Descriptors not avail1 = 6225
> em0: Tx Descriptors not avail2 = 3
> em0: Std mbuf failed = 0
> em0: Std mbuf cluster failed = 0
> em0: Driver dropped packets = 0
> em0: Driver tx dma failure in encap = 0
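(For reference, a dump like the one above is typically produced by poking the
driver's debug sysctl, and the non-zero "Tx Descriptors not avail" counters
can be a hint that the default descriptor rings occasionally run out. A
sketch, assuming the dev.em.0.debug OID and the hw.em.* loader tunables of
the 6.x em driver; the ring sizes are illustrative:)

    # ask the driver to print its debug info to the console
    sysctl dev.em.0.debug=1
    dmesg | tail

    # /boot/loader.conf: enlarge the descriptor rings at the next boot
    hw.em.txd=1024
    hw.em.rxd=1024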
>
> We did some tuning already and our current sysctl.conf looks like this:
>
> # maximum listen queue (backlog) length
> kern.ipc.somaxconn=32768
> # rate limit for ICMP errors and TCP RSTs (responses/sec)
> net.inet.icmp.icmplim=3000
> # system-wide limit on the number of sockets
> kern.ipc.maxsockets=300000
> # delayed ACKs enabled (the default)
> net.inet.tcp.delayed_ack=1
> # drop FIN_WAIT_2 connections after 15 seconds
> net.inet.tcp.finwait2_timeout=15000
> net.inet.tcp.fast_finwait2_recycle=1
> # cap on compressed TIME_WAIT structures
> net.inet.tcp.maxtcptw=196607
> # no per-interrupt limit on received packets
> dev.em.0.rx_processing_limit=-1
>
> greetings,
> cryx
>
> _______________________________________________
> freebsd-performance at freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-performance
> To unsubscribe, send any mail to "freebsd-performance-unsubscribe at freebsd.org"
>

