SMP and setrunnable() - scheduler 4bsd
Julian Elischer
julian at elischer.org
Thu Jul 10 12:56:32 PDT 2003
On Thu, 10 Jul 2003, John Baldwin wrote:
> > 307.504u 93.581s 4:23.22 152.3% 3047+5913k 29+1055io 8pf+0w
> >
> > What is so stunning is the massive increase in user time
> > for the case where the cpu is not being idled.
> > I'm hoping this is a statistical artifact of some sort..
>
> I don't think it is, but you'd need more samples to be truly confident.
> One possible reason: having the CPUs not halt means that idle CPUs
> bang on the runq state continuously. Perhaps this penalizes the
> non-idle CPUs through cache interactions, both when the non-idle CPUs
> are manipulating the queues and by keeping the cache lines that hold
> the queue state permanently resident, preventing their effective use
> by the real code executing on other CPUs.
Possibly the CPU continuously testing
sched_runnable() is interfering with things such as
the clock ticks that want to account for user time.
By making them a lot slower (sched_lock/Giant contention?), the
user time is being 'extended'.
I think I see more *Giant in 'top' when the CPU is not halted than when
it is.
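To make that concrete: with halting disabled, each idle CPU is
effectively stuck in a loop like the sketch below (illustrative only,
not the literal idle-thread code; sched_runnable() and sched_lock are
the real names, the rest of the flow is a rough approximation). Every
pass re-reads run-queue state that the busy CPUs are writing, so those
cache lines ping-pong between CPUs.

	/* Illustrative idle spin, not verbatim kernel source. */
	for (;;) {
		/* Poll the run queues; each read pulls the queue's
		   cache lines over to this CPU. */
		while (sched_runnable() == 0)
			;
		/* Then contend on sched_lock with the busy CPUs. */
		mtx_lock_spin(&sched_lock);
		/* ... pick a thread and switch to it ... */
		mtx_unlock_spin(&sched_lock);
	}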
> > I'll do some tests.
>
> Yes. As it stands now, adding the IPI would just make things more
> complex for no gain. However, if this IPI is present, then we can
> engage in perhaps more drastic measures like really putting a CPU
> to sleep (perhaps disabling interrupts to it?) until it is needed,
> which might bring significant power and heat savings to idle SMP
> machines.
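In rough form, that halt-until-IPI idea would look something like the
sketch below. This is only a sketch: disable_intr(), enable_intr(),
ipi_selected() and IPI_AST are the i386 names of that era, but
kick_idle_cpu() and the exact flow are hypothetical.

/* Idle side: halt with interrupts enabled so any IPI wakes us. */
static void
idle_halt(void)
{
	disable_intr();
	if (sched_runnable())
		enable_intr();
	else
		__asm __volatile("sti; hlt");	/* sti defers interrupts
						 * past hlt, so no wakeup
						 * is lost in between */
}

/* Wakeup side (hypothetical): setrunnable() kicks a halted CPU. */
static void
kick_idle_cpu(u_int cpu)
{
	ipi_selected(1 << cpu, IPI_AST);	/* any interrupt ends hlt */
}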
Check the patch at http://www.freebsd.org/~julian/it.diff --
it's trivial.
BTW, in cpu_idle():

#ifdef SMP
	if (mp_grab_cpu_hlt())
		return;
#endif

What gain is there in this returning? It will return anyhow if there is
work to do, and sched_runnable() is called either way.
Couldn't it just be:

#ifdef SMP
	mp_grab_cpu_hlt();
#endif

?
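For context, cpu_idle() at the time looked approximately like this
(reconstructed from memory of the i386 code, so treat it as a sketch
rather than verbatim source):

void
cpu_idle(void)
{
#ifdef SMP
	if (mp_grab_cpu_hlt())
		return;
#endif
	if (cpu_idle_hlt) {		/* the machdep.cpu_idle_hlt knob */
		disable_intr();
		if (sched_runnable())
			enable_intr();
		else
			__asm __volatile("sti; hlt");
	}
}

Read that way, the early return only skips the cpu_idle_hlt block
below, so the question is whether skipping it actually buys anything.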
>
> > It seems, however, that having halt-on-idle turned on is the
> > right thing these days (which is the current default),
> > but the odd user times are a worry.
>
> I'm sure Terry is all torn up by that conclusion. :-P
>
:-)