Listen queue overflow: N already in queue awaiting acceptance
andre at freebsd.org
Fri Jul 12 09:42:43 UTC 2013
On 12.07.2013 10:25, Gleb Smirnoff wrote:
> On Thu, Jul 11, 2013 at 05:43:09PM +0200, Andre Oppermann wrote:
> A> >> Andriy for example would never have found out about this problem other
> A> >> than receiving vague user complaints about aborted connection attempts.
> A> >> Maybe after spending many hours searching for the cause he may have
> A> >> inferred from endless scrolling in Wireshark that something wasn't
> A> >> right and blamed syncache first. Only later would it emerge that he's
> A> >> either receiving too many connections or his application is too slow
> A> >> dealing with incoming connections.
> A> >
> A> > That's true, but OTOH there are many interesting network conditions like
> A> > excessive packet loss that we don't shout about. The stats are quietly gathered
> A> > and can be examined with netstat. If a system is properly monitored then such
> A> > counters are graphed and can trigger alarms. If the system just misbehaves then
> A> > an administrator can use netstat for inspection.
> A> > Spamming logs in the case of e.g. DDoS attack is not very helpful, IMO.
> A> I agree with that.
> A> I try to make the system behavior more transparent so that even "hidden" problems
> A> can be detected easily. This includes adding reports for more of them, like excessive
> A> packet loss. This makes FreeBSD a friendlier platform for sysadmins, whereas previously
> A> people may have quietly moved on to some other OS due to such unspecific complications.
> A> Most of the TCP-related debugging is protected by net.inet.tcp.log_debug. In this
> A> case it's more complicated because the socket code where this happens is protocol
> A> agnostic and I can't tie it to TCP.
> A> I'm currently looking into a) applying a rate limiter to the message (as suggested
> A> by Luigi); and b) adding a per-socket accept queue overflow statistic that is visible
> A> via netstat. I'll post patches for testing when done.
> What about the following generic idea: syslogd periodically queries the kernel
> about various error counters, and remembers the values. Those, that increased since
> previous query are logged.
> This can be implemented in different ways, either syslogd knows all the sysctls,
> or kernel "pushes" a list of values to syslogd. These are details to be discussed.
> What do you think about the plan itself?
I think it generally makes a lot of sense. It would be really good to have
a generic way of tracking the history of the various counters, not only error
counters, over longer periods with sufficient fine granularity. Tracking
individual socket statistics is probably not feasible due to the large amount of
churn from connections coming and going all the time.
In telco equipment we have the ITU G.826/828 "performance" counters which
cover a longer period with varying granularity: 1-hour intervals over 7 days,
15-minute intervals over 24 hours, 5-minute intervals over 4 hours, and
5-second intervals over 15 minutes. In reality not all intervals may be
available on all systems, but a base set is.
I've been dreaming of this ever since I worked a lot with telco equipment in
the late 90s and early 2000s. It made debugging persistent and intermittent
link and line problems so much easier because we could pinpoint what problem
happened at which point in time. The G.826/828 counters are mostly about link impairments
due to errors, with cascading counters for errored, severely errored, and
unavailable seconds based on various failure cases.
Taking your plan further I suggest the following:
a) Adding a new daemon that polls all static kernel and interface counters
at various periodic intervals and saves them in a human-readable format.
For each configurable interval it calculates the delta between start and
end showing only the increases. For specific configurable error counters
it sends a notice to syslog.
b) We extend the interface counters with the standard RMON and other statistics
counters as found in the IEEE 802.1-802.3 documents and supported by all
(ethernet) NICs for a long time already. With the upcoming ifnet work I'm
doing, the stack-side foundations will be prepared, with the updates to
the drivers being done individually.
c) For interfaces we introduce the notion of "availability" in the sense of
G.826/828 with the severity cascade. These calculations can be made by
the new time-series daemon.