Arg. TCP slow start killing me.
Erich Weiler
weiler at soe.ucsc.edu
Sun Nov 13 21:16:40 UTC 2011
So, I have a FreeBSD 8.1 box that I'm using as a firewall (pfSense 2.0
really, which uses 8.1 as a base), and I'm filtering packets inbound and
I'm seeing a typical sawtooth pattern: I get high bandwidth, then a
packet drops somewhere, and the TCP connections back off a *lot*, then
slowly speed up again, then back off, and so on. These are all
higher-latency WAN connections.
I get an average of 1.5 - 2.0 Gb/s incoming, but I see it spike to
around 3 Gb/s every once in a while before dropping again. I'm trying
to sustain that 3 Gb/s for as long as possible between drops.
Given that 8.1 does not have the more advanced TCP congestion control
algorithms like CUBIC and H-TCP that might help to some degree, I'm
trying to "fake it". ;)
My box has 24GB RAM on it. Is there some tunable I can set that would
effectively buffer incoming packets, even though the buffers would
eventually fill up, just to "delay" the TCP dropped packet signal
telling the hosts on the internet to back off? Like, could I
effectively buffer 10GB of packets in the queue before it sent the
backoff signal? Would setting kern.ipc.nmbclusters or something similar
help?
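For scale, a back-of-the-envelope bandwidth-delay product (BDP)
calculation suggests how much in-flight data such a path actually
holds, which is roughly the buffering needed to keep it full. A
minimal sketch, where the 3 Gb/s figure is from above but the 80 ms
RTT is an assumed "higher latency WAN" value:

```shell
# Hypothetical BDP estimate: rate is from the post, RTT is assumed.
RATE_BPS=3000000000   # 3 Gb/s target rate
RTT_MS=80             # assumed WAN round-trip time

# BDP in bytes = (rate in bits/s / 8) * RTT in seconds
BDP_BYTES=$(( RATE_BPS / 8 * RTT_MS / 1000 ))
echo "BDP: $(( BDP_BYTES / 1000000 )) MB"
```

At those numbers the path holds on the order of 30 MB in flight, so a
multi-gigabyte queue would be far beyond what a single flow's window
could ever use.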
Right now I have:
loader.conf.local:
vm.kmem_size_max=12G
vm.kmem_size=10G
sysctl.conf:
kern.ipc.maxsockbuf=16777216
kern.ipc.nmbclusters=262144
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.recvspace=8192
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.sendspace=16384
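The sysctl.conf values above can also be applied and checked at
runtime without a reboot; a rough sketch (the values just mirror the
ones listed, and `netstat -m` is one way to watch mbuf cluster usage
against the nmbclusters limit):

```shell
# Apply the same tunables at runtime (loader.conf values like
# vm.kmem_size still require a reboot).
sysctl kern.ipc.maxsockbuf=16777216
sysctl kern.ipc.nmbclusters=262144
sysctl net.inet.tcp.recvbuf_max=16777216
sysctl net.inet.tcp.sendbuf_max=16777216

# Read back a single value, and watch mbuf/cluster usage under load.
sysctl -n kern.ipc.nmbclusters
netstat -m
```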
I guess the goal is to keep the bandwidth high, without dropoffs, for
as long as possible, and without as many TCP backoffs on the streams.
Any help much appreciated! I'm probably missing a key point, but that's
why I'm posting to the list. ;)
cheers,
erich