sem_post() performance

John Baldwin jhb at freebsd.org
Mon Oct 13 21:58:39 UTC 2014


On Sunday, September 21, 2014 11:37:42 PM Jilles Tjoelker wrote:
> It has been reported that POSIX semaphores are slow, in contexts such as
> Python. Note that POSIX semaphores are the only synchronization objects
> that support use by different processes in shared memory; this does not
> work for mutexes and condition variables because those types are merely
> pointers to the actual data structure.
> 
> In fact, sem_post() unconditionally performs an umtx system call.
> 
> To avoid both lost wakeups and possible writes to a destroyed semaphore,
> an uncontested sem_post() must check the _has_waiters flag atomically
> with incrementing _count.
> 
> The proper way to do this would be to take one bit from _count and use
> it for the _has_waiters flag; the definition of SEM_VALUE_MAX permits
> this. However, this would require a new set of umtx semaphore operations
> and would break the ABI of process-shared semaphores (things may break if an
> old and a new libc access the same semaphore over shared memory).

Have you thought more about pursuing this option?  I think there was a general
consensus earlier in the thread to just break the ABI (at least adjusting
SEM_MAGIC to give some protection) and fix it.
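
Roughly what I have in mind for the packed layout, as a sketch only (the
USEM_HAS_WAITERS/USEM_COUNT names and the sem_post_fast() helper are made
up here for illustration, not the actual umtx interface):

#include <sys/types.h>
#include <machine/atomic.h>
#include <errno.h>
#include <limits.h>	/* SEM_VALUE_MAX */

#define	USEM_HAS_WAITERS	0x80000000U	/* hypothetical flag bit */
#define	USEM_COUNT(c)		((c) & 0x7fffffffU)

/*
 * Uncontested post: one 32-bit cmpset bumps the count and observes the
 * waiters bit at the same time.  Returns 1 if the caller still has to
 * issue the umtx wake syscall, 0 if no waiter was flagged, -1 on overflow.
 */
static int
sem_post_fast(volatile uint32_t *countp)
{
	uint32_t count;

	do {
		count = *countp;
		if (USEM_COUNT(count) >= SEM_VALUE_MAX) {
			errno = EOVERFLOW;
			return (-1);
		}
		/* The flag bit rides along unchanged in the new value. */
	} while (!atomic_cmpset_rel_32(countp, count, count + 1));

	return ((count & USEM_HAS_WAITERS) != 0);
}

The idea being that a waiter sets the flag bit before sleeping, so the
posting side only needs the syscall when it observes the bit set.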

> This diff only affects 32-bit aligned but 64-bit misaligned semaphores
> on 64-bit systems, and changes _count and _has_waiters atomically using
> a 64-bit atomic operation. It probably needs a may_alias attribute for
> correctness, but <sys/cdefs.h> does not have a wrapper for that.

It does have one bug:

> +			if (atomic_cmpset_rel_64((uint64_t *)&sem->_kern._count,
> +			    oldval, newval))

This needs to be '&sem->_kern._has_waiters'.  Right now the 64-bit
compare-and-set changes _count and _flags, but not _has_waiters.
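
For reference, if I'm reading sys/_umtx.h correctly, the layout in play is:

struct _usem {
	volatile __uint32_t	_has_waiters;
	volatile __uint32_t	_count;
	__uint32_t		_flags;
};

so a 64-bit cmpset rooted at &sem->_kern._count covers the _count/_flags
pair, while rooting it at &sem->_kern._has_waiters covers _has_waiters and
_count, which is the pair the patch actually wants to update atomically.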

-- 
John Baldwin

