Updating our TCP and socket sysctl values...
brde at optusnet.com.au
Sun Mar 20 06:26:50 UTC 2011
On Sat, 19 Mar 2011, Jeff Roberson wrote:
> On Sat, 19 Mar 2011, Alexander Leidinger wrote:
>> On Sat, 19 Mar 2011 15:37:47 +0900 George Neville-Neil
>> <gnn at neville-neil.com> wrote:
>>> I believe it's time for us to upgrade our sysctl values for TCP
>>> sockets so that they are more in line with the modern world. At the
>>> moment we have these limits on our buffering:
>>> kern.ipc.maxsockbuf: 262144
>>> net.inet.tcp.recvbuf_max: 262144
>>> net.inet.tcp.sendbuf_max: 262144
>>> I believe it's time to up these values to something that's in line
>>> with higher speed local networks, such as 10G. Perhaps it's time to
>>> move these to 2MB instead of 256K.
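For concreteness, the bump proposed above (2MB = 2097152 bytes) could be expressed as sysctl.conf entries like the following; this is only a sketch of the proposal, not a committed change:

```shell
# /etc/sysctl.conf -- raise the TCP socket-buffer limits from 256K to 2MB
kern.ipc.maxsockbuf=2097152
net.inet.tcp.recvbuf_max=2097152
net.inet.tcp.sendbuf_max=2097152
```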
All hard-coded limits are bogus. The same limit for a machine that has
8MB of memory is nonsense for a machine that has 8GB. In FreeBSD, AFAIK
only the vm system has _very_ good auto-tuning of parameters and limits,
thanks to dyson's work 10-15 years ago. It has almost no user-settable
parameters or limits like the above.
>> I suggest reading up on this
>> and doing a before/after test to make sure we do not suffer from the
>> described problem. Jim Gettys has test descriptions:
> Are they not talking about buffers in non-endpoint devices? Or perhaps even
> overly large rx queues in endpoints, but not local socket receive buffers?
> It seems that they are describing situations where excessive buffering masks
> network conditions until it's too late.
I don't know, but there is a mostly-unrelated bufferbloat problem that is
purely local. If you have a buffer that is larger than an Ln cache (or
even about half that size), then actually using just a single buffer of
that size guarantees thrashing of the Ln cache, so that almost every
memory access is an Ln cache miss. Even with current hardware, a buffer
of size 256K will thrash most L1 caches and a buffer of size a few MB
will thrash most L2 caches.