thread scheduling at mutex unlock

David Schwartz davids at
Thu May 15 20:26:38 UTC 2008

> Brent, David,
> thank you for the responses.
> I think I incorrectly formulated my original concern.
> It is not about behavior at mutex unlock but about behavior at mutex
> re-lock. You are right that waking waiters at unlock would hurt
> performance. But I think that it is not "fair" that at re-lock former
> owner gets the lock immediately and the thread that waited on it for
> longer time doesn't get a chance.

You are correct, but fairness is not the goal, performance is. If you want
fairness, you are welcome to code it. But threads don't file union
grievances, and it would be absolute foolishness for a scheduler to
sacrifice performance to make threads happier.

The scheduler decides which thread runs, you decide what the running thread
does. You are expected to use your control over the latter to implement
whatever notion of "fairness" your application requires.

Your test program is a classic example of a case where the use of a mutex is
inappropriate.

> Here's a more realistic example than the mock up code.
> Say you have a worker thread that processes queued requests and the load
> is such that there is always something on the queue. Thus the worker
> thread doesn't ever have to block waiting on it.
> And let's say that there is a GUI thread that wants to convey some
> information to the worker thread. And for that it needs to acquire some
> mutex and "do something".
> With current libthr behavior the GUI thread would never have a chance to
> get the mutex as worker thread would always be a winner (as my small
> program shows).

Nonsense. The worker thread would be doing work most of the time and
wouldn't be holding the mutex.

> Or even more realistic: there should be a feeder thread that puts things
> on the queue, it would never be able to enqueue new items until the
> queue becomes empty if worker thread's code looks like the following:
> while(1)
> {
> pthread_mutex_lock(&work_mutex);
> while(queue.is_empty())
> 	pthread_cond_wait(...);
> //dequeue item
> ...
> pthread_mutex_unlock(&work_mutex);
> //perform some short and non-blocking processing of the item
> ...
> }
> Because the worker thread (while the queue is not empty) would never
> enter cond_wait and would always re-lock the mutex shortly after
> unlocking it.

So what? The feeder thread can acquire the mutex in the window between the
worker's unlock and its next lock, while the worker is off processing the
item. The only reason your test code
encountered a "problem" was because you yielded the CPU while you held the
mutex and never used up a timeslice.

> So while improving performance on small scale this mutex re-acquire-ing
> unfairness may be hurting interactivity and thread concurrency and thus
> performance in general. E.g. in the above example queue would always be
> effectively of depth 1.
> Something about "lock starvation" comes to mind.

Nope. You have to create a situation where the mutex is held much more often
than not held to get this behavior. That's a pathological case where the use
of a mutex is known to be inappropriate.

> So, yes, this is not about standards, this is about reasonable
> expectations about thread concurrency behavior in a particular
> implementation (libthr).
> I see now that performance advantage of libthr over libkse came with a
> price. I think that something like queued locks is needed. They would
> clearly reduce raw throughput performance, so maybe that should be a new
> (non-portable?) mutex attribute.

If you want queued locks, feel free to code them and use them. But you have
to work very hard to create a case where they are useful. If you find you're
holding the mutex more often than not, you're doing something *very* wrong.


More information about the freebsd-threads mailing list