thread taskq / unp_gc() using 100% cpu and stalling unix socket IPC

Konstantin Belousov kostikbel at
Wed Nov 14 07:21:20 UTC 2012

On Wed, Nov 14, 2012 at 01:41:04AM +0100, Markus Gebert wrote:
> On 13.11.2012, at 19:30, Markus Gebert <markus.gebert at> wrote:
> > To me it looks like the unix socket GC is triggered way too often and/or running too long, which uses cpu and, worse, causes a lot of contention around the unp_list_lock, which in turn causes delays for all processes relying on unix sockets for IPC.
> > 
> > I don't know why the unp_gc() is called so often and what's triggering this.
> I have a guess now. Dovecot and relayd both use unix sockets heavily. According to dtrace uipc_detach() gets called quite often by dovecot closing unix sockets. Each time uipc_detach() is called unp_gc_task is taskqueue_enqueue()d if fds are inflight.
> in uipc_detach():
> 682		if (local_unp_rights)	
> 683			taskqueue_enqueue(taskqueue_thread, &unp_gc_task);
> We use relayd in a way that keeps the source address of the client when connecting to the backend server (transparent load balancing). This requires IP_BINDANY on the socket which cannot be set by unprivileged processes, so relayd sends the socket fd to the parent process just to set the socket option and send it back. This means an fd gets transferred twice for every new backend connection.
> So we have dovecot calling uipc_detach() often and relayd making it likely that fds are inflight (unp_rights > 0). With a certain amount of load this could cause unp_gc_task to be added to the thread taskq too often, slowing everything unix socket related down by holding global locks in unp_gc().
> I don't know if the slowdown can even cause a negative feedback loop at some point by increasing the chance of fds being inflight. This would explain why sometimes the condition goes away by itself and sometimes requires intervention (taking load away for a moment).
> I'll look into a way to (dis)prove all this tomorrow. Ideas still welcome :-).

If the only issue is indeed too aggressive scheduling of the taskqueue,
then postponing it to the next tick could fix it. The patch below
tries to schedule the gc taskqueue at the next tick if it is not
already scheduled. Could you try it?

diff --git a/sys/kern/subr_taskqueue.c b/sys/kern/subr_taskqueue.c
index 90c6ffc..3bf62f9 100644
--- a/sys/kern/subr_taskqueue.c
+++ b/sys/kern/subr_taskqueue.c
@@ -252,9 +252,13 @@ taskqueue_enqueue_timeout(struct taskqueue *queue,
 		} else {
 			queue->tq_callouts++;
 			timeout_task->f |= DT_CALLOUT_ARMED;
+			if (ticks < 0)
+				ticks = -ticks; /* Ignore overflow. */
+		}
+		if (ticks > 0) {
+			callout_reset(&timeout_task->c, ticks,
+			    taskqueue_timeout_func, timeout_task);
 		}
-		callout_reset(&timeout_task->c, ticks, taskqueue_timeout_func,
-		    timeout_task);
 	}
 	TQ_UNLOCK(queue);
 	return (res);
diff --git a/sys/kern/uipc_usrreq.c b/sys/kern/uipc_usrreq.c
index cc5360f..ed92e90 100644
--- a/sys/kern/uipc_usrreq.c
+++ b/sys/kern/uipc_usrreq.c
@@ -131,7 +131,7 @@ static const struct sockaddr	sun_noname = { sizeof(sun_noname), AF_LOCAL };
  * reentrance in the UNIX domain socket, file descriptor, and socket layer
  * code.  See unp_gc() for a full description.
  */
-static struct task	unp_gc_task;
+static struct timeout_task unp_gc_task;
 
 /*
  * The close of unix domain sockets attached as SCM_RIGHTS is
@@ -672,7 +672,7 @@ uipc_detach(struct socket *so)
 	if (vp)
 		vrele(vp);
 	if (local_unp_rights)
-		taskqueue_enqueue(taskqueue_thread, &unp_gc_task);
+		taskqueue_enqueue_timeout(taskqueue_thread, &unp_gc_task, -1);
 }
 
 static int
@@ -1783,7 +1783,7 @@ unp_init(void)
-	TASK_INIT(&unp_gc_task, 0, unp_gc, NULL);
+	TIMEOUT_TASK_INIT(taskqueue_thread, &unp_gc_task, 0, unp_gc, NULL);
 	TASK_INIT(&unp_defer_task, 0, unp_process_defers, NULL);

More information about the freebsd-stable mailing list