thread scheduling at mutex unlock

Daniel Eischen deischen at
Thu May 15 13:45:51 UTC 2008

On Thu, 15 May 2008, Andriy Gapon wrote:

> Or even more realistic: suppose there is a feeder thread that puts things on 
> the queue; it would never be able to enqueue new items until the queue 
> becomes empty, if the worker thread's code looks like the following:
> while (1)
> {
>     pthread_mutex_lock(&work_mutex);
>     while (queue.is_empty())
>         pthread_cond_wait(...);
>     // dequeue item
>     ...
>     pthread_mutex_unlock(&work_mutex);
>     // perform some short and non-blocking processing of the item
>     ...
> }
> Because the worker thread (while the queue is not empty) would never enter 
> cond_wait and would always re-lock the mutex shortly after unlocking it.

Well, in theory the kernel scheduler will let both threads run fairly
with regard to their CPU usage, so this should even out the enqueueing
and dequeueing threads.

You could also optimize the above a little by dequeueing everything
in the queue in one critical section instead of one item at a time.

> So while improving performance on a small scale, this mutex re-acquiring 
> unfairness may hurt interactivity and thread concurrency, and thus 
> performance in general. E.g. in the above example the queue would 
> effectively always have a depth of 1.
> Something about "lock starvation" comes to mind.
> So, yes, this is not about standards; it is about reasonable expectations 
> about thread concurrency behavior in a particular implementation (libthr).
> I see now that the performance advantage of libthr over libkse came with a price. 
> I think that something like queued locks is needed. They would clearly reduce 
> raw throughput performance, so maybe that should be a new (non-portable?) 
> mutex attribute.


More information about the freebsd-threads mailing list