svn commit: r242910 - in user/andre/tcp_workqueue/sys: kern sys

Maxim Sobolev sobomax at FreeBSD.org
Mon Dec 3 07:41:34 UTC 2012


Hi Alfred and Andre,

It's nice that somebody is taking care of this. The default settings pretty 
much suck on any off-the-shelf PC hardware from the last 5 years.

We are also in quite an mbuf-hungry environment. It's not 10GigE, but we 
are forwarding voice traffic, which consists predominantly of very small 
packets (20-40 bytes). So we have a lot of small packets in flight, which 
uses a lot of mbufs.

What happens, however, is that the network stack consistently locks up 
once we put more than 16-18MB/sec onto it, which corresponds to about 
350-400 Kpps.

This is way below the nmbclusters/maxusers limits we have set (1.5M/1500).
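
(For reference, those limits can be read back programmatically via the 
standard sysctlbyname(3) interface; a minimal userland sketch, assuming 
the stock kern.ipc.nmbclusters and kern.maxusers OIDs, which are plain 
ints on the kernels I've looked at:)

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            const char *oids[] = { "kern.ipc.nmbclusters", "kern.maxusers" };
            unsigned i;

            for (i = 0; i < sizeof(oids) / sizeof(oids[0]); i++) {
                    int val;
                    size_t len = sizeof(val);

                    /* Read each OID as a plain int and print it. */
                    if (sysctlbyname(oids[i], &val, &len, NULL, 0) == -1) {
                            perror(oids[i]);
                            continue;
                    }
                    printf("%s = %d\n", oids[i], val);
            }
            return (0);
    }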

At half of that critical load right now, netstat -m shows something along 
these lines:

66365/71953/138318/1597440 mbuf clusters in use (current/cache/total/max)
149617K/187910K/337528K bytes allocated to network (current/cache/total)

The machine has 24GB of RAM.

vm.kmem_map_free: 24886267904
vm.kmem_map_size: 70615040
vm.kmem_size_scale: 1
vm.kmem_size_max: 329853485875
vm.kmem_size_min: 0
vm.kmem_size: 24956903424

So my question is whether there are some other limits that can cause mbuf 
starvation once the number of allocated clusters grows beyond 200-250k. 
I am also curious how this works with dynamic allocation: since no memory 
is pre-allocated for mbufs, what happens if the network load increases 
gradually while the system is running? Is it possible to eventually hit 
ENOMEM, with all memory already taken by other pools?

For reference, memory usage as reported by top(1):

Mem: 6283M Active, 12G Inact, 3760M Wired, 754M Cache, 2464M Buf, 504M Free
Swap: 40G Total, 6320K Used, 40G Free
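
To illustrate the failure mode I'm asking about: my understanding is that 
hot-path allocations in the stack use M_NOWAIT, so cluster exhaustion 
shows up as a failed allocation and a dropped frame rather than as ENOMEM 
surfaced to applications. A rough sketch of what I mean, modeled on a 
typical driver receive-refill path using the standard m_getcl() KPI 
(rx_refill_alloc is just an illustrative name):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/mbuf.h>

    /*
     * Hypothetical driver receive-refill path: under M_NOWAIT, hitting
     * the cluster zone limit yields a NULL return from m_getcl(), i.e.
     * a dropped frame, not an ENOMEM visible to userland.
     */
    static struct mbuf *
    rx_refill_alloc(void)
    {
            struct mbuf *m;

            m = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR);
            if (m == NULL)
                    return (NULL);  /* zone limit hit; caller drops */
            m->m_len = m->m_pkthdr.len = MCLBYTES;
            return (m);
    }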

Any pointers/suggestions are greatly appreciated.

-Maxim

