proposed smp_rendezvous change

Max Laier max at love2party.net
Tue May 17 16:20:46 UTC 2011


On 05/17/2011 05:16 AM, John Baldwin wrote:
...
> Index: kern/kern_switch.c
> ===================================================================
> --- kern/kern_switch.c	(revision 221536)
> +++ kern/kern_switch.c	(working copy)
> @@ -192,15 +192,22 @@
>  critical_exit(void)
>  {
>  	struct thread *td;
> -	int flags;
> +	int flags, owepreempt;
>
>  	td = curthread;
>  	KASSERT(td->td_critnest != 0,
>  	    ("critical_exit: td_critnest == 0"));
>
>  	if (td->td_critnest == 1) {
> +		owepreempt = td->td_owepreempt;
> +		td->td_owepreempt = 0;
> +		/*
> +		 * XXX: Should move compiler_memory_barrier() from
> +		 * rmlock to a header.
> +		 */

XXX: If we get an interrupt at this point and td_owepreempt was zero, 
the interrupt handler will set it again (after we have already taken our 
local copy), because td_critnest is still non-zero.

So we still end up with a thread that leaks an owepreempt *and* we 
lose a preemption.
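
To spell out the window (an annotated copy of the patched sequence, with a
hypothetical interrupt on the same CPU marked inline):

	owepreempt = td->td_owepreempt;	/* reads 0 */
	td->td_owepreempt = 0;
	/*
	 * <-- interrupt here: the handler wants to preempt, but finds
	 *     td_critnest != 0, so all it does is set td_owepreempt = 1.
	 */
	td->td_critnest = 0;		/* leaves td_owepreempt == 1 behind */
	if (owepreempt) { ... }		/* local copy is 0: preemption lost */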

We really need an atomic_readandclear() that gives us a local copy of 
td_owepreempt *and* clears td_critnest in the same operation.  Sadly, that 
is rather expensive.  It is possible to implement this by encoding 
owepreempt as a flag bit in td_critnest, but that means all writes to 
td_critnest must then be atomic: either because we know we have interrupts 
disabled (i.e. setting owepreempt can be a plain read-modify-write), or by 
using a proper atomic_add/set/...

I'm not sure what the performance impact of this will be.  One would 
hope that an atomic_add without a memory barrier isn't much more expensive 
than a compiler-generated read-modify-write, though.  Especially since 
this cache line should be local and exclusive to us anyway.
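
(Very roughly, and from memory of the generated code rather than from a
measurement: on an SMP amd64 kernel the two variants should come down to
something like

	td->td_critnest++;			/* compiler RMW:   addl/incl */
	atomic_add_int(&td->td_critnest, 1);	/* FreeBSD atomic: lock addl */

on a line we already own in exclusive state.)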

> +		__asm __volatile("":::"memory");
>  		td->td_critnest = 0;
> -		if (td->td_owepreempt) {
> +		if (owepreempt) {
>  			td->td_critnest = 1;
>  			thread_lock(td);
>  			td->td_critnest--;

