tcp_output starving -- is due to mbuf get delay?

Sam Leffler sam at errno.com
Thu Apr 10 15:23:17 PDT 2003


> > I guess I overlooked something after applying the patch (attached):
> >
> > Apr 10 12:11:52 ncs /kernel: sbappend: bad tail 0x0xc209a200 instead of 0x0xc2436c00
> > Apr 10 12:11:52 ncs /kernel: sbappend: bad tail 0x0xc2436c00 instead of 0x0xc238bf00
> > Apr 10 12:11:52 ncs /kernel: sbappend: bad tail 0x0xc238bf00 instead of 0x0xc243f300
> > ...
> >
> > A large number of such messages were added to /var/log/messages.  This
> > indicates either a bug in the patch itself or something I changed while
> > porting it to 4.8 (attached).
> >
> > Any thoughts?
>
> That's likely a sign that the patch isn't appending to the tail of
> the list correctly.  Doing a tail append where the tail is already
> known should be an O(1) operation and should make adding an mbuf to
> a chain faster.  Right now it has to do a linear scan to append
> data, iirc, which is likely _a_ cause of some performance
> degradation.  I'm not an mbuf expert, but I wonder how free mbufs
> are identified.  Regardless, I'll see if I can figure out where the
> problem in the patch is; it should do nothing but make things
> faster.

If this is a repeat of Jason Thorpe's tail pointer optimization for
sbappend, the patch may have started from one I did.  I never committed it
because I could never reproduce the performance gains he saw.  I attributed
that to a difference between NetBSD's and FreeBSD's TCP window setup
algorithms.

My patch for -stable (now probably very out of date) is in
http://www.freebsd.org/~sam/thorpe-stable.patch.
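
For anyone who hasn't looked at the patch, the general shape of the idea is
roughly what the sketch below shows: cache a pointer to the last mbuf in the
socket buffer so an append doesn't have to walk the whole chain.  This is
only an illustration, not the committed code; the names here
(sockbuf_sketch, sb_append_tail, sb_mbtail) are made up for the example, and
the real sbappend/sbcompress path has to deal with records, control mbufs,
and a lot more.

/*
 * Illustrative sketch only -- not the actual patch.  "struct mbuf" is a
 * minimal stand-in, and the sockbuf_sketch/sb_append_tail names are
 * invented for this example.
 */
#include <stddef.h>

struct mbuf {
	struct mbuf *m_next;	/* next mbuf in the chain */
	/* data fields omitted */
};

struct sockbuf_sketch {
	struct mbuf *sb_mb;	/* head of the mbuf chain */
	struct mbuf *sb_mbtail;	/* cached pointer to the last mbuf */
};

/* Without a cached tail, every append walks the whole chain: O(n). */
struct mbuf *
sb_find_tail(struct sockbuf_sketch *sb)
{
	struct mbuf *m = sb->sb_mb;

	if (m == NULL)
		return (NULL);
	while (m->m_next != NULL)
		m = m->m_next;
	return (m);
}

/* With the cached tail, the append itself is O(1) in the common case. */
void
sb_append_tail(struct sockbuf_sketch *sb, struct mbuf *m)
{
	if (sb->sb_mb == NULL)
		sb->sb_mb = m;
	else
		sb->sb_mbtail->m_next = m;
	/*
	 * The cache must always end up pointing at the true last mbuf;
	 * if it goes stale you get exactly the "bad tail X instead of Y"
	 * messages reported above.
	 */
	while (m->m_next != NULL)
		m = m->m_next;
	sb->sb_mbtail = m;
}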

I haven't been following this thread closely, but FWIW I routinely get ~700
Mb/s running netperf between two -stable machines connected by a cross-over
cable.  Each machine has an Intel PRO/1000 card (em driver); 32-bit PCI in
one machine and 64-bit in the other, but I've gotten similar performance
with 32-bit PCI on both sides.  As others have noted, you need to watch out
for "environmental factors" when interpreting performance numbers.  Aside
from hardware issues (there are many), be especially wary of IRQ entropy
harvesting.
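
If you want a quick way to see whether interrupt harvesting is enabled,
something like the snippet below works from userland.  Caveat: the OID name
kern.random.sys.harvest.interrupt is the one I remember from -current, so
treat it as an assumption and check sysctl -a on your own box; releases that
predate the harvesting code won't have it at all.

/*
 * Userland check of the interrupt entropy harvesting knob.  The OID name
 * below is an assumption (what I recall from -current) and may differ or
 * be missing on other releases.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	int enabled;
	size_t len = sizeof(enabled);

	if (sysctlbyname("kern.random.sys.harvest.interrupt",
	    &enabled, &len, NULL, 0) == -1) {
		perror("sysctlbyname");
		return (1);
	}
	printf("IRQ entropy harvesting: %s\n", enabled ? "on" : "off");
	return (0);
}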

    Sam


