svn commit: r211176 - in head/sys: amd64/amd64 i386/i386

Kostik Belousov kostikbel at gmail.com
Wed Aug 11 19:11:51 UTC 2010


On Wed, Aug 11, 2010 at 07:10:00PM +0200, Attilio Rao wrote:
> 2010/8/11 Attilio Rao <attilio at freebsd.org>:
> > 2010/8/11 Kostik Belousov <kostikbel at gmail.com>:
> >> On Wed, Aug 11, 2010 at 04:29:21PM +0200, Attilio Rao wrote:
> >>> 2010/8/11 Kostik Belousov <kostikbel at gmail.com>:
> >>> > On Wed, Aug 11, 2010 at 01:21:46PM +0200, Attilio Rao wrote:
> >>> >> 2010/8/11 Kostik Belousov <kostikbel at gmail.com>:
> >>> >> > On Wed, Aug 11, 2010 at 10:51:27AM +0000, Attilio Rao wrote:
> >>> >> >> Author: attilio
> >>> >> >> Date: Wed Aug 11 10:51:27 2010
> >>> >> >> New Revision: 211176
> >>> >> >> URL: http://svn.freebsd.org/changeset/base/211176
> >>> >> >>
> >>> >> >> Log:
> >>> >> >>   IPI handlers generally run with interrupts disabled because
> >>> >> >>   they are served via an interrupt gate.
> >>> >> >>
> >>> >> >>   However, that doesn't explicitly prevent preemption and thread
> >>> >> >>   migration, so scheduler pinning may be necessary in some
> >>> >> >>   handlers.  Fix that.
> >>> >> >
> >>> >> > How is the preemption supposed to happen, aside from the explicit
> >>> >> > calls to mi_switch() from e.g. critical_exit()?
> >>> >>
> >>> >> IIRC it is hardclock() wanting to schedule softclock(); it is the
> >>> >> critical_exit() in thread_unlock() that may trigger the switch
> >>> >> (sorry for not digging further, it has been a while since I hacked
> >>> >> on this part, but you can verify it on your own).
> >>> >> We already have other places in the kernel that take care of
> >>> >> preemption/migration, lapic_handle_timer() for example.
> >>> >
> >>> > Right, and if interrupts are indeed disabled, I do not see how
> >>> > preemption may be triggered in fragments like
> >>> >        cpu = PCPU_GET(cpuid);
> >>> >        cpumask = PCPU_GET(cpumask);
> >>>
> >>> I don't recall all the details and have no time to dig now.  However,
> >>> spinlock_enter() also explicitly disables preemption via
> >>> critical_enter() after having disabled interrupts.
> >>> Let me CC jhb, as he implemented spinlock_enter() and may have a clue
> >>> about how preemption can happen with interrupts disabled.
> >>
> >> spinlock_enter() disables preemption, I think, to handle recursive
> >> spinlock_enter/leave calls and to prevent a switch on leave.
> >
> > No.
> > Please look at how spinlock_enter() is implemented on ia32/amd64 to
> > see how it handles recursion.
> 
> And besides, we have other patterns running with interrupts disabled
> that take care of preemption as well (I already pointed to one, and I
> think you could find others easily).
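
For reference, the amd64 spinlock_enter()/spinlock_exit() pair being pointed
to looks roughly like the sketch below.  This is written from memory against
the 8/9-era sys/amd64/amd64/machdep.c, so treat it as an illustration rather
than the verbatim source: recursion is handled by td_md.md_spinlock_count,
and preemption is blocked explicitly by critical_enter(), even though
interrupts are already off.

	void
	spinlock_enter(void)
	{
		struct thread *td;
		register_t flags;

		td = curthread;
		if (td->td_md.md_spinlock_count == 0) {
			flags = intr_disable();	/* cli, saving the flags */
			td->td_md.md_spinlock_count = 1;
			td->td_md.md_saved_flags = flags;
		} else
			td->td_md.md_spinlock_count++;
		critical_enter();		/* block preemption explicitly */
	}

	void
	spinlock_exit(void)
	{
		struct thread *td;
		register_t flags;

		td = curthread;
		critical_exit();		/* may switch if preemption is owed */
		flags = td->td_md.md_saved_flags;
		td->td_md.md_spinlock_count--;
		if (td->td_md.md_spinlock_count == 0)
			intr_restore(flags);	/* re-enable interrupts last */
	}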

Let me rephrase the original question: how can code of the kind
	a = b;
	c = d;
executed with interrupts disabled, be subject to kernel preemption?
Well, the real code is slightly more involved, because evaluating the
right-hand side of the assignment requires rebasing against a
non-default segment register on x86oids, but that detail is irrelevant.
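
To make the question concrete, the fragments in question look roughly like
the skeleton below.  It is a hypothetical handler in the spirit of the
r211176 change, not a copy of the committed diff; example_ipi_handler is an
invented name, and the headers listed in the comment are what such code would
normally pull in.

	/* needs <sys/param.h>, <sys/pcpu.h>, <sys/sched.h> */
	static void
	example_ipi_handler(void)
	{
		u_int cpu;
		cpumask_t mask;

		sched_pin();			/* what r211176 adds */
		cpu = PCPU_GET(cpuid);		/* per-CPU segment-relative load */
		mask = PCPU_GET(cpumask);
		/* ... operate on cpu and mask ... */
		sched_unpin();
	}

The handler itself is entered through an interrupt gate, so interrupts stay
disabled for its whole body; the question is whether anything between the two
PCPU_GET()s can still lead to a context switch or CPU migration.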