thread scheduling at mutex unlock

Andriy Gapon avg at icyb.net.ua
Thu May 15 08:36:02 UTC 2008


on 15/05/2008 07:22 David Xu said the following:
> In fact, libthr is trying to avoid this convoying. If thread #1
> hands off the ownership to thread #2, it causes lots of context
> switches. In an ideal world, I would let thread #1 run until it
> exhausts its time slice, and at the end of its time slice
> thread #2 would get the mutex ownership. Of course it is difficult
> to make this work on SMP, but on UP I would expect the result to
> be close enough if the thread scheduler is sane. So we don't raise
> priority in the kernel umtx code when a thread is blocked; this
> gives thread #1 some time to re-acquire the mutex without context
> switches and increases throughput.

Brent, David,

thank you for the responses.
I think I formulated my original concern incorrectly.
It is not about behavior at mutex unlock but about behavior at mutex 
re-lock. You are right that waking waiters at unlock would hurt 
performance. But I think it is not "fair" that at re-lock the former 
owner gets the lock immediately, while a thread that has been waiting 
on it for a longer time doesn't get a chance.

Here's a more realistic example than the mock-up code.
Say you have a worker thread that processes queued requests, and the 
load is such that there is always something on the queue, so the 
worker thread never has to block waiting on it.
And let's say that there is a GUI thread that wants to convey some 
information to the worker thread, and to do that it needs to acquire 
the same mutex and "do something".
With the current libthr behavior the GUI thread would never get a 
chance to acquire the mutex, as the worker thread would always win (as 
my small program shows; a sketch of it follows below).
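For reference, here is a minimal sketch in the spirit of that small 
program (a reconstruction, not the exact code from my earlier mail; 
the worker/gui names and the one-second head start are purely 
illustrative). The worker locks and unlocks work_mutex in a tight 
loop, and the GUI thread measures how long a single blocking lock 
takes; with an unfair re-lock policy the reported wait can grow very 
large:

#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static pthread_mutex_t work_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Worker: behaves as if the queue were never empty, so the mutex
   is re-acquired almost immediately after every unlock. */
static void *worker(void *arg)
{
	for (;;) {
		pthread_mutex_lock(&work_mutex);
		/* pretend to dequeue and inspect an item */
		pthread_mutex_unlock(&work_mutex);
		/* short non-blocking processing of the item */
	}
	return NULL;
}

/* GUI: acquires the same mutex once and reports how long the
   acquisition took. */
static void *gui(void *arg)
{
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	pthread_mutex_lock(&work_mutex);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	pthread_mutex_unlock(&work_mutex);
	printf("gui waited %ld seconds for the mutex\n",
	    (long)(t1.tv_sec - t0.tv_sec));
	return NULL;
}

int main(void)
{
	pthread_t tw, tg;

	pthread_create(&tw, NULL, worker, NULL);
	sleep(1);	/* let the worker start spinning first */
	pthread_create(&tg, NULL, gui, NULL);
	pthread_join(tg, NULL);
	return 0;
}

(Build with cc -pthread. On a UP machine with the re-lock behavior 
described above I would expect the reported wait to be very long, 
since the worker keeps winning the re-acquisition.)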
Or, even more realistically: suppose there is a feeder thread that 
puts things on the queue. It would never be able to enqueue new items 
until the queue becomes empty, if the worker thread's code looks like 
the following:

while (1)
{
	pthread_mutex_lock(&work_mutex);
	while (queue.is_empty())
		pthread_cond_wait(&work_cond, &work_mutex); //work_cond: the queue's condition variable
	//dequeue item
	...
	pthread_mutex_unlock(&work_mutex);
	//perform some short and non-blocking processing of the item
	...
}

This is because the worker thread (while the queue is not empty) would 
never enter cond_wait and would always re-lock the mutex shortly after 
unlocking it.

So, while improving performance on a small scale, this mutex 
re-acquisition unfairness may hurt interactivity and thread 
concurrency, and thus performance in general. E.g. in the above 
example the queue would always effectively have a depth of 1.
The term "lock starvation" comes to mind.

So, yes, this is not about standards; it is about reasonable 
expectations of thread concurrency behavior in a particular 
implementation (libthr).
I see now that the performance advantage of libthr over libkse came at 
a price. I think that something like queued locks is needed (a sketch 
of the idea follows below). They would clearly reduce raw throughput, 
so maybe that should be a new (non-portable?) mutex attribute.
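To make the idea concrete, here is one way a queued lock could be 
sketched in user code, as a classic ticket lock layered over a plain 
mutex and condition variable (the fair_mutex names are mine, purely 
for illustration; a real implementation inside libthr would of course 
look quite different). Each locker takes a ticket and waits until its 
number is served, so acquisition is strictly FIFO:

#include <pthread.h>

struct fair_mutex {
	pthread_mutex_t lock;		/* protects the counters */
	pthread_cond_t  cond;		/* waiters sleep here */
	unsigned long   next_ticket;	/* next ticket to hand out */
	unsigned long   now_serving;	/* ticket currently allowed in */
};

#define FAIR_MUTEX_INITIALIZER \
	{ PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 }

void fair_mutex_lock(struct fair_mutex *fm)
{
	unsigned long my_ticket;

	pthread_mutex_lock(&fm->lock);
	my_ticket = fm->next_ticket++;
	while (fm->now_serving != my_ticket)
		pthread_cond_wait(&fm->cond, &fm->lock);
	pthread_mutex_unlock(&fm->lock);
	/* the ticket itself is the lock: only the thread holding
	   now_serving gets past this point */
}

void fair_mutex_unlock(struct fair_mutex *fm)
{
	pthread_mutex_lock(&fm->lock);
	fm->now_serving++;
	pthread_cond_broadcast(&fm->cond);	/* wake all waiters;
						   each rechecks its
						   own ticket */
	pthread_mutex_unlock(&fm->lock);
}

Every unlock wakes all waiters and each rechecks its ticket, which is 
wasteful but keeps the sketch short; that kind of overhead is exactly 
the raw-throughput price mentioned above.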

-- 
Andriy Gapon

