tcp_output starving -- is due to mbuf get delay?

Borje Josefsson bj at
Fri Apr 11 09:24:16 PDT 2003

On Fri, 11 Apr 2003 09:08:19 PDT Terry Lambert wrote:

> Mattias Pantzare wrote:
> > Terry Lambert wrote:
> > > Latency = pool retention time = queue size
> > 
> > Then explain this: FreeBSD to FreeBSD on that link uses all CPU on the
> > sender, the receiver is fine, but performance is not. NetBSD to FreeBSD
> > fills the link (1 Gbit/s). On the same computers. MTU 4470. Send and
> > receive maximum windows were tuned to the same values on NetBSD and
> > FreeBSD.
> I rather expect that the number of jumbogram buffers on FreeBSD is
> tiny and/or your MTU is not being properly negotiated between the
> endpoints, and you are fragging the bejesus out of your packets.

Both endpoints have MTU set to 4470, as have all the routers in between. 
"traceroute -n -Q 1 -q 1 -w 1 -f remotehost 4470" and netstat both report 
an MTU of 4470.

> A good thing to look at at this point would be:
> 	o	Clean boot of FreeBSD target
> 	o	Run NetBSD against it
> 	o	Save statistics

What type of statistics do you mean?

> 	o	Clean boot of FreeBSD target
> 	o	Run FreeBSD against it
> 	o	Save statistics
> 	o	Compare saved statistics of NetBSD vs. FreeBSD
> 		against the target machine
> > And packet loss will affect the performance differently if you have a
> > large bandwidth-latency product.
> You mean "bandwidth delay product".  Yes, assuming you have packet
> loss.  From your description of your setup, packet loss should not
> be possible, so we can discount it as a factor.
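To put numbers on the bandwidth-delay product being discussed, here is a small sketch. The 1 Gbit/s link rate and 4470-byte MTU come from the thread; the 20 ms round-trip time is an assumed figure for a nationwide path, not a measured one.

```python
# Bandwidth-delay product: bytes that must be in flight to fill the pipe.
# Link rate and MTU are from the thread; the RTT is an assumption.

def bdp_bytes(bandwidth_bps, rtt_s):
    """Bytes in flight needed to keep the link busy for one RTT."""
    return bandwidth_bps * rtt_s / 8

link = 1_000_000_000   # 1 Gbit/s
rtt = 0.020            # assumed 20 ms round-trip time
mtu = 4470             # MTU from the thread

window = bdp_bytes(link, rtt)
print(f"required window: {window / 1024:.0f} KiB")
print(f"packets in flight: {window / mtu:.0f}")
```

Any window tuned below this value caps throughput regardless of CPU load, which is why both sides compare send/receive window settings first.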

Of course packet loss is possible on a nationwide network. If I lose a 
packet on the (expected) 10-second test (with NetBSD), recovering from 
that drops performance from 900+ to ~550 Mbps. This shows very clearly if 
I run "netstat 1".
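The sensitivity of throughput to even rare loss at this bandwidth-delay product can be sketched with the simplified Mathis et al. steady-state bound, throughput <= MSS / (RTT * sqrt(p)). The MSS is derived from the thread's 4470-byte MTU; the 20 ms RTT and the loss rates are assumed illustrative values, not measurements from this test.

```python
# Simplified Mathis et al. bound on long-run TCP throughput.
# MSS is roughly MTU 4470 minus IP/TCP headers; RTT and loss
# rates are assumed values for illustration.
import math

def mathis_bps(mss_bytes, rtt_s, loss_rate):
    """Upper bound on steady-state TCP throughput in bits/s."""
    return (mss_bytes / (rtt_s * math.sqrt(loss_rate))) * 8

mss = 4430          # assumed payload per segment at MTU 4470
rtt = 0.020         # assumed round-trip time
for p in (1e-5, 1e-4, 1e-3):
    print(f"p={p:g}: {mathis_bps(mss, rtt, p) / 1e6:.0f} Mbit/s")
```

The bound falls with the square root of the loss rate, so a single loss episode in a short test can dominate the average, consistent with the drop visible in "netstat 1".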

> You may want to
> disable fast restart on the FreeBSD sender.

Which OID is that?

As a side note, I tried to set tcp.inflight_enable, but that made things 
much worse.


More information about the freebsd-performance mailing list