svn commit: r252032 - head/sys/amd64/include

Bruce Evans brde at optusnet.com.au
Tue Jun 25 14:54:01 UTC 2013


On Tue, 25 Jun 2013, Gleb Smirnoff wrote:

> On Mon, Jun 24, 2013 at 11:16:33PM +1000, Bruce Evans wrote:
> B> > K> This is quite an interesting idea, but I still have not decided
> B> > K> whether it is acceptable.  The issue is that we could add the
> B> > K> carry to the other processor's counter, if preemption kicks in at
> B> > K> the right time between the two instructions.  I did not find any
> B> > K> argument why it would be wrong; the races for the fetch operation
> B> > K> seem to be the same with either local update method.
> B> >
> B> > This would be wrong since the update isn't locked.  Thus, if we are
> B> > put on another CPU between the two instructions, and the second
> B> > instruction updates another CPU's counter simultaneously with the
> B> > original CPU we were on, then we lose an update.
> B>
> B> Hmm, this is subtle.  The update is never lost, but is applied to a
> B> different counter, non-atomically at a later time.  Non-atomicity
> B> only matters when there is a carry, since the counter only goes
> B> transiently backwards in that case.  For example, initial state:
> B>
> B>      CPU1 counter:  00000000 ffffffff
> B>      CPU2 counter:  00000000 fffffffe
> B>
> B> Start adding 1 to the first counter, doing it non-atomically by
> B> incrementing the low word first.
> B>
> B>      CPU1 counter:  00000000 00000000  (carry in CPU1 eflags)
> B>      CPU2 counter:  00000000 fffffffe
> B>
> B> Get preempted at this point:
> B>
> B>      CPU1 counter:  00000000 00000000
> B>      CPU2 counter:  00000000 fffffffe  (carry in CPU2 eflags)
>
> Nope, the non-atomicity isn't specific to the case where we carry a
> bit from the least significant part to the most significant one.

Sure, but the other parts already work correctly.
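
To make the window concrete, here is the split update sketched in C (a
hypothetical function standing in for the two-instruction addl/adcl
sequence; the real code is inline asm):

    #include <stdint.h>

    /*
     * Non-atomic split 64-bit increment: low word first, then the
     * carry.  Between the two stores, a fetch sees the counter
     * transiently backwards.  In the real asm each instruction
     * addresses the current CPU's slot via %gs, so a preemption
     * between them lands the carry on whichever CPU we resume on.
     */
    static inline void
    counter_u64_add_split(volatile uint32_t c[2], uint32_t inc)
    {
            uint32_t old = c[0];

            c[0] = old + inc;       /* low word */
            if (c[0] < old)         /* unsigned wraparound == carry out */
                    c[1]++;         /* high word, possibly much later */
    }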

> The non-atomicity is much simpler: two CPUs write to the same memory
> address without using a lock prefix.

No, they can't, since these are per-CPU counters.  The exceptions are
broken per-CPU counters, and invalid accesses in upper-layer code by
CPUs that don't own the counter.
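
For reference, the amd64 update under discussion is a single
%gs-relative instruction, so the effective address of the owning CPU's
slot is formed by the CPU that executes it, and a migration can never
split the update across two counters.  Quoting from memory (treat this
as a sketch, not verbatim sys/amd64/include/counter.h):

    static inline void
    counter_u64_add(counter_u64_t c, int64_t inc)
    {
            /* One addq: no window between address formation and store. */
            __asm __volatile("addq\t%1,%%gs:(%0)"
                :
                : "r" ((char *)c - (char *)&__pcpu[0]), "ri" (inc)
                : "memory", "cc");
    }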

> Thus, any update mechanism that first calculates the absolute per-CPU
> address and then writes to it without a lock requires a critical
> section, because preemption between the calculation and the write,
> followed by execution on another CPU, leads to two CPUs writing to
> the same address.

Only broken per-CPU counters do that.  I already pointed out that all
arches except amd64 and i386 have buggy PCPU_ADD() and other PCPU_*()
access functions because they do exactly that.  ia64, powerpc and
sparc64 even document that they do this in an XXX comment.  Actually,
ia64 and sparc64 and perhaps some others are not quite this bad, since
their pcpu pointer is in a special register that should get switched
by context switches.  It still takes a lot of magic for the compiler
to use this register for the accesses and not copy it to another
register that doesn't get context switched.  Their pcpu access bugs
may be limited to the compiler possibly generating non-atomic
load-modify-store instructions.
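
Reduced to a sketch (a hypothetical helper, not the literal per-arch
pcpu.h macros), the broken pattern and the critical-section fix look
like this:

    /* Broken: the address is computed before the non-atomic RMW. */
    static inline void
    pcpu_add_broken(u_long *slots, u_long val)
    {
            u_long *p = &slots[curcpu];     /* this CPU's slot, for now */

            /*
             * Preemption plus migration here leaves p pointing at the
             * old CPU's slot, which that CPU may update concurrently.
             */
            *p += val;                      /* unlocked load-modify-store */
    }

    /* Fixed: stay on one CPU for the whole update. */
    static inline void
    pcpu_add_fixed(u_long *slots, u_long val)
    {
            critical_enter();
            slots[curcpu] += val;
            critical_exit();
    }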

Their XXX comment has wording bugs either way.  It says that "this"
operation should be made atomic, but the scope of the comment is four
operations.  The first operation is PCPU_ADD(), and it used to be the
only one that does a load-modify-store.  Now it is followed by the
PCPU_INC() operation, which also does a load-modify-store.  Aligned
word-sized operations have a chance of being atomic enough for the
other operations.
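
To spell that out (hypothetical expansions, not the actual macros): a
word-sized PCPU_SET() can compile to one aligned store, while
PCPU_ADD() and PCPU_INC() inherently compile to a load-modify-store
unless the arch has a suitable single instruction:

    static inline void
    pcpu_set_word(volatile u_long *p, u_long v)
    {
            *p = v;         /* one aligned store: atomic enough */
    }

    static inline void
    pcpu_add_word(volatile u_long *p, u_long v)
    {
            *p = *p + v;    /* load, modify, store: a preemption window
                               sits between the load and the store */
    }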

Bruce

