sched_lock && thread_lock()

Jeff Roberson jroberson at chesapeake.net
Wed May 23 22:56:45 UTC 2007


Resuming the original intent of this thread:

http://www.chesapeake.net/~jroberson/threadlock.diff

I have updated this patch to the most recent -CURRENT.  I have included a 
scheduler called sched_smp.c, which is a copy of ULE using per-cpu scheduler 
spinlocks.  There are also changes to be slightly more aggressive about 
updating the td_lock pointer when a thread has been blocked.
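
For reference, here is a minimal sketch of how per-cpu scheduler locks fit 
the td_lock model (the tdq names below are illustrative, not necessarily 
what sched_smp.c actually uses):

struct tdq {
	struct mtx	tdq_lock;	/* protects this CPU's run queues */
	/* run queues, load counters, etc. */
};

static struct tdq	tdq_cpu[MAXCPU];

/*
 * Point a runnable thread's td_lock at its CPU's queue lock so that
 * thread_lock() serializes against the owning CPU's scheduler.  The
 * caller holds the thread's current lock across the switch.
 */
static void
tdq_assign(struct thread *td, int cpu)
{

	td->td_lock = &tdq_cpu[cpu].tdq_lock;
}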

This continues to be stable in testing by Kris Kennaway and myself on 1- to 
8-CPU machines.  Attilio is working on addressing concerns with the 
vmmeter diff.  It's my fault for not sending this around to arch@ before 
committing.  I apologize.  There will be one more diff before threadlock goes 
in, fixing rusage so that it doesn't depend on a global scheduler lock.  I 
will mail that here for review.  After that I intend to commit threadlock.

Please complain sooner rather than later!

Thanks,
Jeff

On Sun, 20 May 2007, Jeff Roberson wrote:

> Attilio and I have been working on addressing the increasing problem of 
> sched_lock contention on -CURRENT.  Attilio has been addressing the parts of 
> the kernel which do not need to fall under the scheduler lock and moving them 
> under separate locks: for example, the ldt/gdt lock and the clock lock 
> committed earlier, as well as the use of atomics for the vmcnt structure.
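>
> To illustrate the vmcnt direction, a counter that used to be bumped under
> sched_lock can be updated with an atomic instead (a sketch; the field name
> is picked for illustration):
>
> 	atomic_add_int(&cnt.v_swtch, 1);	/* no sched_lock held */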
>
> I have been working on an approach that uses thread locks rather than a global 
> scheduler lock.  The design is similar to Solaris's container locks, but the 
> details are different.  The basic idea is to have a pointer in the thread 
> structure that points at a spinlock protecting the thread.  This spinlock 
> may be the scheduler lock, a turnstile lock, or a sleep queue lock. 
> As the thread changes state from running to blocked on a lock or sleeping, 
> the lock changes with it.
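>
> In code, the idea is roughly this (a sketch following the description
> here; the actual diff may differ in detail):
>
> struct thread {
> 	struct mtx	*td_lock;	/* spinlock protecting this thread */
> 	/* ... remainder of the thread structure ... */
> };
>
> Re-pointing td_lock is done while the thread's current spinlock is held,
> so a locker can never observe a half-finished handoff.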
>
> This has several advantages.  The majority of the kernel simply calls 
> thread_lock(), which figures out the details.  The kernel then knows nothing 
> of the particulars of the scheduler locks, and the schedulers are free to 
> implement them in any way they like.  Furthermore, in some cases the 
> locking is reduced, because locking the thread has the side effect of locking 
> the container.
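>
> As a sketch, thread_lock() itself can be a small retry loop that chases
> the pointer (illustrative, not necessarily the patch's exact code):
>
> void
> thread_lock(struct thread *td)
> {
> 	struct mtx *m;
>
> 	for (;;) {
> 		m = td->td_lock;
> 		mtx_lock_spin(m);
> 		/*
> 		 * td_lock may have been re-pointed while we spun;
> 		 * if so, drop the stale lock and try again.
> 		 */
> 		if (m == td->td_lock)
> 			return;
> 		mtx_unlock_spin(m);
> 	}
> }
>
> On return the caller holds whichever spinlock currently protects the
> thread, without knowing which container it belongs to.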
>
> This patch does not implement per-cpu scheduler locks.  It just changes the 
> kernel to support this model.  I have a fork of ULE in development that runs 
> with per-cpu locks, but it is not ready yet.  This means there should be 
> very little change in system performance until the scheduler catches up.  In 
> fact, on a 2-CPU system the difference is immeasurable, or nearly so, on every 
> workload I have tested.  On an 8-way Opteron system the results vary between 
> +10% on some reasonable workloads and -15% on super-smack, a benchmark with 
> inherent problems that I believe is not exposing a real performance problem 
> with this patch.
>
> This has also been tested extensively by Kris and myself on a variety of 
> machines, and I believe it to be fairly solid.  The only thing remaining is 
> to fix rusage so that it does not rely on a global scheduler lock.
>
> I am posting the patch here in case anyone with specific knowledge of 
> turnstiles, sleepqueues, or signals would like to review it, and as a general 
> heads up to people interested in where the kernel is headed.
>
> This will apply to -CURRENT just prior to my kern_clock.c commits.  I will 
> re-merge and update again in the next few days, probably after we sort out 
> rusage.
>
> http://people.freebsd.org/~jeff/threadlock.diff
>
> Thanks,
> Jeff

