pthread_{mutex,cond} & fifo/starvation/scheduling policy

Bernard van Gastel bvgastel at bitpowder.com
Thu Jan 21 10:27:36 UTC 2010


In a real-world application such a proposed queue would almost always work, but I'm primarily trying to exclude all starvation situations (speed is less relevant). And although such a worker can execute its work and be scheduled fairly, adding the work to the queue can still result in starvation: one of the threads trying to add to the queue could stall forever if the lock is heavily contended.

Is this possible with the POSIX threads primitives? Or is the only option to use IPC such as message queues for this?
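
To make the kind of FIFO hand-off I have in mind concrete, here is an untested sketch of a "ticket lock" built only from pthread_mutex_t and pthread_cond_t (the type and function names are mine). Waiters are admitted in strict arrival order, so no thread adding to the queue can be passed over indefinitely:

#include <pthread.h>

/* Untested sketch: a FIFO "ticket lock".  Threads take a ticket under the
 * internal mutex and wait until their number is served, so the protected
 * section is entered in arrival order. */
struct ticket_lock {
	pthread_mutex_t mutex;
	pthread_cond_t  cond;
	unsigned long   next_ticket;	/* next ticket to hand out */
	unsigned long   now_serving;	/* ticket allowed to proceed */
};

static void ticket_lock_init(struct ticket_lock *l) {
	pthread_mutex_init(&l->mutex, NULL);
	pthread_cond_init(&l->cond, NULL);
	l->next_ticket = 0;
	l->now_serving = 0;
}

static void ticket_lock_acquire(struct ticket_lock *l) {
	unsigned long my_ticket;
	pthread_mutex_lock(&l->mutex);
	my_ticket = l->next_ticket++;
	while (my_ticket != l->now_serving)
		pthread_cond_wait(&l->cond, &l->mutex);
	pthread_mutex_unlock(&l->mutex);
}

static void ticket_lock_release(struct ticket_lock *l) {
	pthread_mutex_lock(&l->mutex);
	l->now_serving++;
	/* Broadcast: we don't know which waiter holds the next ticket. */
	pthread_cond_broadcast(&l->cond);
	pthread_mutex_unlock(&l->mutex);
}

The internal mutex is still scheduled unfairly, but it is only held for a few instructions while taking a ticket or bumping now_serving, so the ordering of the protected section itself stays FIFO.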

Regards,
	Bernard

On 19 Jan 2010, at 19:46, Dan Nelson wrote:

> In the last episode (Jan 19), Bernard van Gastel said:
>> I'm curious about the exact scheduling policy of POSIX threads in relation to
>> mutexes and conditions.  If there are two threads (a & b), both with the
>> following code:
>> 
>> while (1) {
>> 	pthread_mutex_lock(mutex);
>> 	...
>> 	pthread_mutex_unlock(mutex);
>> }
>> 
>> What is the scheduling policy of the different thread libraries? Are both
>> threads getting an equal amount of time?  Are there no starvation issues
>> (are they executed in alternating turns)?  (a test program of mine
>> indicates that libpthread and libthr both have starvation issues,
>> unlike Mac OS X 10.6)
> 
> There's no guarantee of fairness when dealing with mutexes afaik.  My guess
> is that if thread "a" unlocks the mutex and still has time left in its
> quantum, it'll be able to grab it again without even going to the kernel. 
> From the POSIX docs on mutexes:
> 
>  http://www.opengroup.org/onlinepubs/9699919799/functions/pthread_mutex_lock.html#tag_16_439_08
> 
>  "Mutex objects are intended to serve as a low-level primitive from which
>   other thread synchronization functions can be built.  As such, the
>   implementation of mutexes should be as efficient as possible, and this
>   has ramifications on the features available at the interface.
> 
>   The mutex functions and the particular default settings of the mutex
>   attributes have been motivated by the desire to not preclude fast,
>   inlined implementations of mutex locking and unlocking."
> 
> The idea being that mutexes should be held for as little time as possible. 
> Is there a way to write your code so that most of the CPU-consuming activity
> is done outside of the mutex?  Perhaps use a job queue of some sort, and
> only lock the mutex when pushing/popping elements.  Then worker processes
> can run without holding the mutex, and will be fairly scheduled by the
> kernel.
> 
> -- 
> 	Dan Nelson
> 	dnelson at allantgroup.com
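
For reference, the job queue Dan describes would look roughly like this untested sketch (the type and function names are mine): the mutex is held only around push/pop, and workers run each job with the mutex released.

#include <pthread.h>
#include <stdlib.h>

struct job {
	struct job *next;
	void (*run)(void *arg);
	void *arg;
};

struct job_queue {
	pthread_mutex_t mutex;
	pthread_cond_t  nonempty;
	struct job *head, *tail;
};

static void queue_init(struct job_queue *q) {
	pthread_mutex_init(&q->mutex, NULL);
	pthread_cond_init(&q->nonempty, NULL);
	q->head = q->tail = NULL;
}

/* Producer: the mutex is held only long enough to link in the job. */
static void queue_push(struct job_queue *q, struct job *j) {
	j->next = NULL;
	pthread_mutex_lock(&q->mutex);
	if (q->tail)
		q->tail->next = j;
	else
		q->head = j;
	q->tail = j;
	pthread_cond_signal(&q->nonempty);
	pthread_mutex_unlock(&q->mutex);
}

/* Consumer: block until a job is available, then unlink it. */
static struct job *queue_pop(struct job_queue *q) {
	struct job *j;
	pthread_mutex_lock(&q->mutex);
	while (q->head == NULL)
		pthread_cond_wait(&q->nonempty, &q->mutex);
	j = q->head;
	q->head = j->next;
	if (q->head == NULL)
		q->tail = NULL;
	pthread_mutex_unlock(&q->mutex);
	return j;
}

/* Worker thread: all CPU-heavy work happens outside the mutex. */
static void *worker(void *arg) {
	struct job_queue *q = arg;
	for (;;) {
		struct job *j = queue_pop(q);
		j->run(j->arg);
		free(j);
	}
	return NULL;	/* not reached */
}

This keeps the mutex hold times tiny, but the push itself still contends on an unfair mutex, which is exactly the step I am worried about above.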


