[PATCH] Divorce critical sections from spin mutexes (round 2)
John Baldwin
jhb at FreeBSD.org
Fri Jan 14 13:24:10 PST 2005
Ok, in the process of updating my tree that held the earlier version of the
critical section vs. spin mutexes patch, I think I have found and fixed the
bug that may have caused the lockups a few people reported. As such, I'd
like folks to test the updated patch. Details of what the patch does:
- spin locks and critical sections are divorced. Specifically, the sole
purpose of a critical section is to keep the current thread from being
preempted until it exits the section. Nothing requires that the critical
section actually disable interrupts during the section, as any interrupt
threads scheduled would simply not preempt: either they would be picked up
by another CPU, or they would preempt the current thread when it exited the
critical section. However, spin locks do need to prevent themselves from
being interrupted by any code that can try to acquire a spin lock. Strictly
speaking, only spin mutexes used in interrupt context (sched_lock, icu_lock,
locks in INTR_FAST handlers, sleepq locks, etc.) need to block interrupts,
but if you have a mutex that is only used in top half code, you should
probably be using a normal mutex anyway, so the set of spin mutexes not used
in interrupt context tends to be small to empty. So far in SMPng, almost
all critical sections have been inside of spin mutexes (since spin mutexes
also need to block preemptions in addition to interrupts). Thus, for the
sake of simplicity, critical sections also included the interrupt blocking
behavior. (Keep in mind that this was an evolutionary process. :) However,
as SMPng progresses, it has now become useful to divorce the two concepts,
especially as some folks are working on locking schemes which just use
critical sections to protect per-CPU resources that are not accessed from
interrupt context. What this change does is to move the interrupt
blocking/deferment/whatever bits that spin mutexes need into a separate
spinlock_enter()/spinlock_exit() API completely implemented in MD code.
Critical sections, on the other hand, are now reduced to a simple per-thread
nesting count and are now completely MI.
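To make the split concrete, here is a toy userland sketch (not the kernel
code) of how the two APIs might relate after the patch. The field names
(td_critnest, td_md_spinlock_count) are borrowed from the kernel's thread
structure for flavor, and a plain bool stands in for the CPU interrupt flag;
treat everything here as an illustration of the layering, not the real
implementation:

```c
#include <assert.h>
#include <stdbool.h>

struct thread {
	int  td_critnest;		/* critical section nesting (MI) */
	int  td_md_spinlock_count;	/* spinlock nesting (MD) */
	bool td_md_saved_intr;		/* interrupt state at first entry */
};

/* Stand-in for the CPU's interrupt-enable flag. */
static bool intr_enabled = true;

/* MI: just bump a per-thread count; the scheduler defers preemption
 * while td_critnest > 0.  Interrupts stay enabled. */
static void
critical_enter(struct thread *td)
{
	td->td_critnest++;
}

static void
critical_exit(struct thread *td)
{
	td->td_critnest--;
	/* On the final exit the real kernel would run any deferred
	 * preemption here; the sketch only tracks the count. */
}

/* MD: disable interrupts on first entry, restore the saved state on
 * last exit.  Spin locks still block preemption, so they nest a
 * critical section inside the interrupt handling. */
static void
spinlock_enter(struct thread *td)
{
	if (td->td_md_spinlock_count == 0) {
		td->td_md_saved_intr = intr_enabled;
		intr_enabled = false;
	}
	td->td_md_spinlock_count++;
	critical_enter(td);
}

static void
spinlock_exit(struct thread *td)
{
	critical_exit(td);
	td->td_md_spinlock_count--;
	if (td->td_md_spinlock_count == 0)
		intr_enabled = td->td_md_saved_intr;
}
```

The point of the layering is visible in the sketch: code that only needs to
block preemption can call critical_enter()/critical_exit() without ever
touching the interrupt flag, while spin mutexes get both behaviors via
spinlock_enter()/spinlock_exit().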
- The MI code that creates idle threads for each of the CPUs no longer tries
to set curthread up for the APs and no longer messes with the critnest count
for the idlethreads. Instead, the MD code now explicitly borrows the
idlethread context for the APs when it needs it and is responsible for
adjusting the critical section and spinlock nesting counts to account for
the weirdness of borrowing the context for the first context switch.
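The count adjustment described above might look something like the following
toy sketch. The function name and the exact bias values are assumptions for
illustration; the idea is just that the first context switch on an AP will
perform one critical_exit() and one spinlock_exit() that were never matched
by enters on that CPU, so the MD startup code pre-biases the counts to keep
them balanced:

```c
#include <assert.h>

struct thread {
	int td_critnest;		/* critical section nesting */
	int td_md_spinlock_count;	/* MD spinlock nesting */
};

/* Hypothetical helper: called by MD code when an AP borrows the
 * idlethread's context for its first context switch.  Starting both
 * counts at 1 accounts for the unmatched exits that switch will do. */
static void
ap_borrow_idlethread(struct thread *idletd)
{
	idletd->td_critnest = 1;
	idletd->td_md_spinlock_count = 1;
}
```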
I've tested this on SMP i386, SMP sparc64, and UP alpha. Testing on other
archs and on SMP would be greatly appreciated. Patch is at
http://www.FreeBSD.org/~jhb/patches/spinlock.patch
--
John Baldwin <jhb at FreeBSD.org> <>< http://www.FreeBSD.org/~jhb/
"Power Users Use the Power to Serve" = http://www.FreeBSD.org