tcp_output starving -- is due to mbuf get delay?
tlambert2 at mindspring.com
Fri Apr 11 09:24:20 PDT 2003
Borje Josefsson wrote:
> I should add that I have tried with MTU 1500 also. Using NetBSD as sender
> works fine (just a little bit higher CPU load). When we tried MTU 1500 with
> FreeBSD as sender, we got even lower performance.
> Somebody else in this thread said that he had got full GE speed between
> two FreeBSD boxes connected back-to-back. I don't question that, but that
> doesn't prove anything. The problem arises when you are trying to do this
> long-distance and have to handle a large mbuf queue.
The boxes were not connected "back to back", they were connected
through three Gigabit switches and a VLAN trunk. But they were in
a lab, yes.
I'd be happy to try long distance for you, and even go so far as
to fix the problem for you, if you are willing to drop 10GBit fiber
to my house. 8-) 8-).
As far as a large mbuf queue goes, one obvious difference is SACK
support; however, that cannot be the problem, since the
NetBSD->FreeBSD speed is (supposedly) unaffected.
What is the FreeBSD->NetBSD speed?
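For scale, a back-of-the-envelope bandwidth-delay-product estimate (all
numbers here are illustrative assumptions, not measurements from this
thread) shows why a long-haul GE path queues so much data:

```shell
#!/bin/sh
# Bandwidth-delay product: bytes that must be in flight (and queued)
# to keep a long-distance link full.  Numbers are illustrative.
LINK_BPS=1000000000   # assumed 1 Gbit/s link speed
RTT_MS=40             # assumed long-distance round-trip time, in ms
# bytes in flight = (bits/sec / 8) * (RTT in seconds)
BDP=$(( LINK_BPS / 8 * RTT_MS / 1000 ))
echo "BDP: ${BDP} bytes"
# rough count of 2 KB mbuf clusters needed to hold that much data
echo "mbuf clusters: $(( BDP / 2048 ))"
```

That works out to roughly 5 MB in flight -- far beyond a 64 KB socket
buffer, and a few thousand mbuf clusters sitting on the send queue.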
Some knobs to try on FreeBSD:
net.inet.ip.intr_queue_maxlen -> 300
net.inet.ip.check_interface -> 0
net.inet.tcp.rfc1323 -> 0
net.inet.tcp.inflight_enable -> 1
net.inet.tcp.inflight_debug -> 0
net.inet.tcp.delayed_ack -> 0
net.inet.tcp.newreno -> 0
net.inet.tcp.slowstart_flightsize -> 4
net.inet.tcp.msl -> 1000
net.inet.tcp.always_keepalive -> 0
net.inet.tcp.sendspace -> 65536 (on sender)
Don't try them all at once and expect magic; you will probably need
to experiment with different combinations.
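If any combination helps, the values can be made persistent across
reboots in /etc/sysctl.conf -- a sketch with the values from the list
above (pick only the ones that actually helped):

```
# /etc/sysctl.conf fragment -- example only, trim to the knobs you need
net.inet.ip.intr_queue_maxlen=300
net.inet.tcp.inflight_enable=1
net.inet.tcp.delayed_ack=0
net.inet.tcp.slowstart_flightsize=4
net.inet.tcp.sendspace=65536
```

For one-off testing, the same settings can be applied at runtime with
`sysctl -w name=value` and checked with `sysctl name`.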
Also, try recompiling your kernel *without* IPSEC support.
More information about the freebsd-performance mailing list