scheduler (sched_4bsd) questions

Stephan Uphoff ups at tree.com
Thu Sep 30 11:30:56 PDT 2004


On Thu, 2004-09-30 at 10:17, John Baldwin wrote:
> Fair enough.  The right place to fix this is in turnstile_unpend() though I 
> think.  I have had these patches that try to "clump" setrunqueue's before 
> preempting lying around (but not thoroughly tested yet) that might fix this 
> as well but in the turnstile code itself:
- snip -
> --- //depot/projects/smpng/sys/kern/subr_turnstile.c	2004/09/03 14:14:21
> +++ //depot/user/jhb/preemption/kern/subr_turnstile.c	2004/09/10 21:36:10
> @@ -727,6 +726,7 @@
>  	 * in turnstile_wait().  Set a flag to force it to try to acquire
>  	 * the lock again instead of blocking.
>  	 */
> +	critical_enter();
>  	while (!TAILQ_EMPTY(&pending_threads)) {
>  		td = TAILQ_FIRST(&pending_threads);
>  		TAILQ_REMOVE(&pending_threads, td, td_lockq);
> @@ -742,6 +742,7 @@
>  			MPASS(TD_IS_RUNNING(td) || TD_ON_RUNQ(td));
>  		}
>  	}
> +	critical_exit();
>  	mtx_unlock_spin(&sched_lock);
>  }
- snip -
> 
> I.e., you could just move the critical_enter() in subr_turnstile.c earlier so 
> it is before the mtx_unlock_spin() of the turnstile chain lock.

I agree - this would be the right place.
I was originally planning to do some more work in kern_mutex and did not
want to touch more than one file ;-)
Can you check this in?
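
Just so we are talking about the same thing, my reading of the
suggestion is roughly the shape below. This is only a sketch from
memory, not the actual subr_turnstile.c code; the lock and field names
(tc->tc_lock for the turnstile chain lock, etc.) are approximations.

	critical_enter();		/* moved up: defer preemption */
	mtx_unlock_spin(&tc->tc_lock);	/* drop the turnstile chain lock */
	...
	mtx_lock_spin(&sched_lock);
	while (!TAILQ_EMPTY(&pending_threads)) {
		td = TAILQ_FIRST(&pending_threads);
		TAILQ_REMOVE(&pending_threads, td, td_lockq);
		...			/* clear the thread's turnstile state */
		setrunqueue(td);	/* may flag a preemption */
	}
	critical_exit();		/* any deferred preemption happens here */
	mtx_unlock_spin(&sched_lock);

That way setrunqueue() can still flag a preemption, but the actual
switch is held off by the critical section until every thread on
pending_threads is on a run queue, so none of them is left in limbo
on the local list.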

Your other patches look like they are aimed at avoiding needless
context switches to improve performance, but they should not have an
impact on correctness - right?
Hopefully I will get some time to look at them more closely later on.


	Stephan
