increasing transmit speeds in WAN setting?

Ted Mittelstaedt tedm at
Thu Oct 19 06:38:35 UTC 2006

Hi Moses,

I know you're not going to believe me, but you are running into a
driver bug of some kind.  If you have a really high quality Ethernet
switch with full management in it you can probably see it - log in to
the switch and look at the port statistics. Cisco routers are designed
to sense this and you will see it in their logs as the error message
"late collisions"; any decent hardware network sniffer will show
it as well.

The most common problem is that the switch and network card aren't
properly negotiating duplex.  Another trouble spot is flow control on
full duplex being misnegotiated; this is particularly critical on
gigabit Ethernet.

The reason you're getting good throughput on local connections is
that the Ethernet layer simply keeps retransmitting until the packet
goes through, and the retransmissions happen so fast that you don't
notice them.  That is also why latency is so heavily affecting your
throughput.
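One way to see why latency dominates: with a fixed TCP send buffer, the
sender can have at most one buffer's worth of data in flight per
round trip, so throughput is capped at roughly buffer / RTT no matter
how fast the link is.  A rough sketch (the 32 KB buffer and 70 ms RTT
are illustrative assumptions, not figures from this thread):

```python
# Rough model: with a fixed TCP send buffer, throughput cannot
# exceed buffer_size / rtt, regardless of raw link speed.
def max_throughput_bytes_per_sec(send_buffer_bytes, rtt_seconds):
    return send_buffer_bytes / rtt_seconds

# Illustrative numbers: a 32 KB send buffer (a common default on
# older stacks) at a 70 ms cross-country round-trip time.
limit = max_throughput_bytes_per_sec(32 * 1024, 0.070)
print(f"{limit / 1024:.0f} KB/s")  # roughly 457 KB/s
```

Numbers in that ballpark line up with the few-hundred-KB/s ceiling
described in the quoted report below, which is why buffer sizing is
worth checking alongside the duplex issue.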

You can try several things.  First, temporarily try switching
over to a 10/100 card like an Intel EtherExpress Pro/100
if you have a PCI slot in the server.  If that works, then you're
going to have to try replacing your switch.  If you have a really
good switch, you can try hard-coding its ports' speed and duplex,
do the same on the server, and see if that does anything.

You also should be aware that many of the smaller and cheaper
gigabit switches do not have the ability to sustain full
gigabit Ethernet speeds with back-to-back packets; their
internal processors aren't fast enough.  Once more, this is
a problem that won't show up on a local connection, since the
retransmissions are so fast.
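For the long-haul case, it is also worth knowing how much data must be
in flight to fill the pipe at all: the bandwidth-delay product.  A
quick sizing sketch, assuming a 1 Gbit/s link and a 70 ms RTT (an
illustrative midpoint of the 60-80 ms reported below):

```python
# Bandwidth-delay product: bytes that must be in flight to keep
# a link of the given speed full across the given round-trip time.
def bdp_bytes(link_bits_per_sec, rtt_seconds):
    return link_bits_per_sec / 8 * rtt_seconds

# 1 Gbit/s at 70 ms RTT:
needed = bdp_bytes(1_000_000_000, 0.070)
print(f"{needed / (1024 * 1024):.1f} MB")  # about 8.3 MB
```

A socket buffer far below that figure caps throughput well short of
the link rate, which is one reason a stack that auto-tunes its buffers
can outrun one using small fixed defaults on the same path.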


----- Original Message ----- 
From: "Moses Leslie" <marmoset at>
To: <freebsd-questions at>
Sent: Wednesday, October 18, 2006 10:31 PM
Subject: increasing transmit speeds in WAN setting?

> Hi,
> We're running 6.1-R, and are having difficulty getting decent speeds as
> latency increases.  The server is connected via gbit copper, and is gbit
> or better to the internet (depending on the path).
> For everything local, we're able to get what you'd expect (300+MBit
> without really any tuning).  However, when the latency is 60-80ms (i.e.
> across the US), we're unable to get better than around 300KB/s.
> It appears to be possibly related to the tcp.inflight stuff, but disabling
> it or messing with some of the related sysctls doesn't appear to help
> much.  Downloads often start quickly, but are then throttled back down to
> 300KB/s within 10 seconds or so.  We've changed the hz (100 to 10000), the
> net.inet.tcp.sendspace, kern.ipc.maxsockbuf, and tried different
> variations on the inflight tunables, but nothing has made a positive
> difference of more than ~20KB/s at best.
> If the server is running linux (2.6 kernel with default TCP settings), we
> can get much better speeds, 600-1000KB/s easily.  If we were going for
> time/distance records, we would try changing around tcp settings on the
> client, but we're trying to maximize performance for standard surfers who
> wouldn't know how to do that, so we're looking for anything that is server
> side only.
> We've been searching high and low for any tuning ideas but aren't able to
> find anything that's made a difference.  From looking at how the
> congestion stuff works in the source, it appears that something like:
> might be happening here, but we're kind of stabbing in the dark.
> Does anyone have any tuning ideas for 6.1 in a WAN setting?
> Thanks,
> Moses
> _______________________________________________
> freebsd-questions at mailing list
> To unsubscribe, send any mail to
> "freebsd-questions-unsubscribe at"

More information about the freebsd-questions mailing list