tcp_output starving -- is due to mbuf get delay?

Borje Josefsson bj at dc.luth.se
Fri Apr 11 07:07:53 PDT 2003


On Fri, 11 Apr 2003 15:58:55 +0200 Mattias Pantzare wrote:

> Terry Lambert wrote:
> > Mattias Pantzare wrote:
> > 
> >>>The products that Jeffrey Hsu and I and Alfred and Jon Mini
> >>>worked on at a previous company had no problems at all
> >>>saturating a 1 Gbit/s link, even through a VLAN trunk through
> >>>Cisco and one other less intelligent switch (i.e. two switches
> >>>and a VLAN trunk).
> >>
> >>A key factor here is that the tests were on a link with a 20 ms
> >>round-trip time, using a single TCP connection. And the switches
> >>were in addition to a few routers on a 10 Gbit/s network.
> > 
> > 
> > Sorry, but this is not a factor.  If you think it is, then you
> > are running with badly tuned send and receive maximum window
> > sizes.
> > 
> > Latency = pool retention time = queue size
> 
> Then explain this: FreeBSD to FreeBSD on that link uses all CPU on the
> sender; the receiver is fine, but performance is not. NetBSD to FreeBSD
> fills the link (1 Gbit/s), on the same computers, with MTU 4470. Send and
> receive maximum windows were tuned to the same values on NetBSD and
> FreeBSD.
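
For reference, the window Terry is talking about follows from the
bandwidth-delay product. A rough back-of-the-envelope, assuming the
20 ms RTT quoted above:

    1 Gbit/s * 20 ms = 125 MB/s * 0.020 s ~= 2.5 MB in flight

So both ends need send and receive buffers of at least ~2.5 MB (on
FreeBSD via net.inet.tcp.sendspace/recvspace and kern.ipc.maxsockbuf)
before window tuning stops being the limit, and the sender has to keep
that much unacknowledged data queued in mbufs the whole time.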

I should add that I have tried with MTU 1500 as well. Using NetBSD as
sender it still works fine (just a slightly higher CPU load). When we
tried MTU 1500 with FreeBSD as sender, we got even lower performance,
which fits a per-segment cost on the sender: MTU 1500 roughly triples
the number of segments per window compared to MTU 4470.

Somebody else in this thread said that he had gotten full GE speed between
two FreeBSD boxes connected back-to-back. I don't question that, but it
doesn't prove anything: back-to-back the RTT is near zero, so the send
queue stays tiny. The problem arises when you try to do this
long-distance and have to handle a large mbuf queue.
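
One plausible mechanism for the sender-side CPU burn (a sketch of the
suspected cost pattern, not a confirmed diagnosis): each segment sent
is copied out of the socket send buffer by byte offset, and a copy
routine that starts at the head of the mbuf chain (as m_copym() does
when handed an offset) walks further for every segment. With a ~2.5 MB
send queue that makes the work per window roughly quadratic. The toy
userland program below only models that walk; the sizes are
illustrative numbers, not kernel code:

#include <stdio.h>

int
main(void)
{
	long mss = 1448;	/* payload per segment at MTU 1500 */
	long window = 2500000;	/* ~BDP of 1 Gbit/s * 20 ms */
	long cluster = 2048;	/* bytes per mbuf cluster (chain link) */
	long walked = 0;
	long off;

	/* One pass over a full window of unacknowledged data. */
	for (off = 0; off < window; off += mss)
		walked += off / cluster; /* links traversed to reach off */

	printf("segments per window: %ld\n", window / mss);
	printf("chain links walked:  %ld\n", walked);
	return (0);
}

Note that halving the segment size (MTU 1500 vs. 4470) roughly triples
the segments per window, and with them the walking, which would match
the MTU 1500 result above. A sender that caches a pointer to the
current position in the chain avoids the re-walk entirely; NetBSD doing
fine here as sender is at least consistent with that.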

--Börje


