svn commit: r334197 - head/sys/kern
Rick Macklem
rmacklem at uoguelph.ca
Fri May 25 13:31:46 UTC 2018
Andriy Gapon wrote:
>On 25/05/2018 04:15, Mateusz Guzik wrote:
>> Author: mjg
>> Date: Thu May 24 23:58:57 2018
>> New Revision: 334197
>> URL: https://svnweb.freebsd.org/changeset/base/334197
>>
>> Log:
>> Implement Mostly Exclusive locks.
>>
>> High lock contention is one of the biggest scalability bottlenecks on
>> multicore systems. Although the real fix consists of making lock
>> consumers better suited for multicore operation, locking primitive
>> performance is still a crucial factor in real-world systems.
>>
>> It turns out that a lot of the locking is overzealous - the lock is
>> held longer than it needs to be, and sometimes is not needed in the
>> first place. Even when a lock is needed in principle, all relevant
>> consumers may only be *reading* the state protected by the lock, or
>> modifying disjoint parts of the protected data.
>>
>> As such, a lot of the locking can be elided.
>>
>> The idea boils down to trying to take the lock for a limited amount of
>> time and, in case of failure, pretending we got it anyway. The approach,
>> along with a practical analysis of the performance-win/new-crash ratio,
>> is described in "Towards reliability-oblivious low latency locking" by
>> Leland Oller.
>
>Great change! Looking forward to new crashes!
>:-)
I think that this should be enabled via a sysctl. Maybe something like:
kern.enable_windows_reliability
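For the record, the algorithm as I read the commit amounts to the sketch
below. This is just a userland analogy using pthreads; mex_lock(),
mex_unlock() and MEX_SPIN_LIMIT are names I made up for illustration, not
the committed kernel interface.

#include <pthread.h>
#include <stdbool.h>

#define	MEX_SPIN_LIMIT	16384	/* matches the bound in the commit */

/*
 * Try to take the lock for a bounded number of attempts; on failure,
 * carry on as if we got it anyway ("Mostly Exclusive").
 */
static bool
mex_lock(pthread_mutex_t *m)
{
	int i;

	for (i = 0; i < MEX_SPIN_LIMIT; i++) {
		if (pthread_mutex_trylock(m) == 0)
			return (true);	/* we really own the lock */
	}
	return (false);			/* we do not, but pretend we do */
}

static void
mex_unlock(pthread_mutex_t *m, bool owned)
{
	/* Only release what was actually acquired; otherwise do nothing. */
	if (owned)
		pthread_mutex_unlock(m);
}

A caller would then do:

	bool owned = mex_lock(&lock);
	/* critical section, hopefully exclusive */
	mex_unlock(&lock, owned);

The pretend branch is, of course, where the new crashes come from, which
is why I'd want the sysctl off by default.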
rick
>> Modified:
>>   head/sys/kern/kern_mutex.c
>>
>> Modified: head/sys/kern/kern_mutex.c
>> ==============================================================================
>> --- sys/kern/kern_mutex.c	(revision 334196)
>> +++ sys/kern/kern_mutex.c	(working copy)
>> @@ -557,6 +557,9 @@ __mtx_lock_sleep(volatile uintptr_t *c, uintptr_t
>>  		lda.spin_cnt++;
>>  #endif
>>  #ifdef ADAPTIVE_MUTEXES
>> +		if (lda.spin_cnt > 16384)
>> +			break;
>> +
>>  		/*
>>  		 * If the owner is running on another CPU, spin until the
>>  		 * owner stops running or the state of the lock changes.
>> @@ -1020,16 +1023,22 @@ __mtx_unlock_sleep(volatile uintptr_t *c, uintptr_
>>  	turnstile_chain_lock(&m->lock_object);
>>  	_mtx_release_lock_quick(m);
>>  	ts = turnstile_lookup(&m->lock_object);
>> -	MPASS(ts != NULL);
>> -	if (LOCK_LOG_TEST(&m->lock_object, opts))
>> -		CTR1(KTR_LOCK, "_mtx_unlock_sleep: %p contested", m);
>> -	turnstile_broadcast(ts, TS_EXCLUSIVE_QUEUE);
>> -
>>  	/*
>> -	 * This turnstile is now no longer associated with the mutex.  We can
>> -	 * unlock the chain lock so a new turnstile may take it's place.
>> +	 * We previously failed to grab the lock.  The unlock fast path
>> +	 * brings us here thinking there are blocked threads, but there
>> +	 * may be none.
>>  	 */
>> -	turnstile_unpend(ts, TS_EXCLUSIVE_LOCK);
>> +	if (ts != NULL) {
>> +		if (LOCK_LOG_TEST(&m->lock_object, opts))
>> +			CTR1(KTR_LOCK, "_mtx_unlock_sleep: %p contested", m);
>> +		turnstile_broadcast(ts, TS_EXCLUSIVE_QUEUE);
>> +
>> +		/*
>> +		 * This turnstile is now no longer associated with the mutex.
>> +		 * We can unlock the chain lock so a new turnstile may take its place.
>> +		 */
>> +		turnstile_unpend(ts, TS_EXCLUSIVE_LOCK);
>> +	}
>>  	turnstile_chain_unlock(&m->lock_object);
>>  }
>> 
>
> P.S.
> I had to re-read the commit message twice, read the actual change three
> times, and check the calendar five times.
> --
> Andriy Gapon