sched_pin() versus PCPU_GET

mdf at FreeBSD.org
Fri Jul 30 13:44:01 UTC 2010


2010/7/30 Kostik Belousov <kostikbel at gmail.com>:
> On Thu, Jul 29, 2010 at 04:57:25PM -0700, mdf at freebsd.org wrote:
>> On Thu, Jul 29, 2010 at 4:39 PM,  <mdf at freebsd.org> wrote:
>> > We've seen a few instances at work where witness_warn() in ast()
>> > indicates the sched lock is still held, but the place it claims the
>> > lock was acquired is in fact code that cannot still be holding it, like:
>> >
>> >        thread_lock(td);
>> >        td->td_flags &= ~TDF_SELECT;
>> >        thread_unlock(td);
>> >
>> > What I was wondering is, even though the assembly I see in objdump -S
>> > for witness_warn has the increment of td_pinned before the PCPU_GET:
>> >
>> > ffffffff802db210:       65 48 8b 1c 25 00 00    mov    %gs:0x0,%rbx
>> > ffffffff802db217:       00 00
>> > ffffffff802db219:       ff 83 04 01 00 00       incl   0x104(%rbx)
>> >         * Pin the thread in order to avoid problems with thread migration.
>> >         * Once that all verifies are passed about spinlocks ownership,
>> >         * the thread is in a safe path and it can be unpinned.
>> >         */
>> >        sched_pin();
>> >        lock_list = PCPU_GET(spinlocks);
>> > ffffffff802db21f:       65 48 8b 04 25 48 00    mov    %gs:0x48,%rax
>> > ffffffff802db226:       00 00
>> >        if (lock_list != NULL && lock_list->ll_count != 0) {
>> > ffffffff802db228:       48 85 c0                test   %rax,%rax
>> >         * Pin the thread in order to avoid problems with thread migration.
>> >         * Once that all verifies are passed about spinlocks ownership,
>> >         * the thread is in a safe path and it can be unpinned.
>> >         */
>> >        sched_pin();
>> >        lock_list = PCPU_GET(spinlocks);
>> > ffffffff802db22b:       48 89 85 f0 fe ff ff    mov    %rax,-0x110(%rbp)
>> > ffffffff802db232:       48 89 85 f8 fe ff ff    mov    %rax,-0x108(%rbp)
>> >        if (lock_list != NULL && lock_list->ll_count != 0) {
>> > ffffffff802db239:       0f 84 ff 00 00 00       je     ffffffff802db33e
>> > <witness_warn+0x30e>
>> > ffffffff802db23f:       44 8b 60 50             mov    0x50(%rax),%r12d
>> >
>> > is it possible for the hardware to do any re-ordering here?
>> >
>> > The reason I'm suspicious is not just that the code doesn't leak a
>> > lock at the indicated point, but that in one instance I can see in
>> > the dump that the lock_list local in witness_warn() points into the
>> > pcpu structure for CPU 0 (and I was warned about sched lock 0), while
>> > the CPU id in panic_cpu is 2.  So clearly the thread was migrated
>> > right around panic time.
>> >
>> > This is the amd64 kernel on stable/7.  I'm not sure exactly what kind
>> > of hardware; it's a 4-way Intel chip from about 3 or 4 years ago IIRC.
>> >
>> > So... do we need some kind of barrier in the code for sched_pin() for
>> > it to really do what it claims?  Could the hardware have re-ordered
>> > the PCPU_GET load ("mov    %gs:0x48,%rax") to before the sched_pin()
>> > increment?
>>
>> So after some research, the answer I'm getting is "maybe".  What I'm
>> concerned about is whether the h/w reordered the read in PCPU_GET in
>> front of the previous store that increments td_pinned.  While not an
>> ultimate authority,
>> http://en.wikipedia.org/wiki/Memory_ordering#In_SMP_microprocessor_systems
>> implies that stores can be reordered after loads on both Intel and
>> AMD chips, which I believe would account for the behavior seen here.
>
> Am I right that you suggest that in the sequence
>        mov     %gs:0x0,%rbx      [1]
>        incl    0x104(%rbx)       [2]
>        mov     %gs:0x48,%rax     [3]
> interrupt and preemption happen between points [2] and [3] ?
> And the %rax value after the thread was put back onto the (different) new
> CPU and executed [3] was still from the old cpu' pcpu area ?

Right, but I'm also asking if it's possible the hardware executed the
instructions as:

        mov     %gs:0x0,%rbx      [1]
        mov     %gs:0x48,%rax     [3]
        incl    0x104(%rbx)       [2]

On PowerPC this is definitely possible, and I'd use an isync to prevent
the re-ordering.  I haven't been able to confirm that Intel/AMD
guarantee ordering strict enough that no barrier is needed.
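
To make that concrete, the kind of change I have in mind would look
roughly like this -- just a sketch against what sched_pin() looks like
in sys/sched.h, not a tested patch:

static __inline void
sched_pin(void)
{

	curthread->td_pinned++;
	/*
	 * Keep a later PCPU_GET() load from being hoisted above the
	 * td_pinned store.  The empty asm is only a compiler barrier;
	 * if the hardware itself can move the load ahead of the store,
	 * this would have to be a real fence (mfence on amd64, the
	 * isync I mentioned on PowerPC) -- which is exactly the open
	 * question.
	 */
	__asm __volatile("" : : : "memory");
}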

It's admittedly a very tight window, and we've only seen it twice, but
I have no other way to explain the symptom.  Unfortunately, in the dump
gdb shows both %rax and %gs as 0, so I can't confirm that they held a
value I'd expect from another CPU.  The only thing I do have is
panic_cpu being different from the CPU at the time of
PCPU_GET(spinlocks), but of course there's definitely a window there.

> I do not believe this is possible.  The CPU is always self-consistent.
> A context switch away from the thread can only occur on return from an
> interrupt handler, in critical_exit(), or the like.  This code is
> executing on the same processor, and thus should already see the
> effect of [2], which would prevent the context switch.

Right, but if the hardware allowed reads to pass writes, then %rax
would hold a stale value, which would be saved at interrupt time and
restored on another processor.
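
For context on what that register actually holds: PCPU_GET(spinlocks)
on amd64 boils down to a single %gs-relative move into a register.
This is a simplified paraphrase of the __PCPU_GET() machinery in
machine/pcpu.h, not the real macro, but it shows there's nothing tying
the loaded value to the CPU we're on once the move has happened:

	/*
	 * Simplified; the real __PCPU_GET() handles arbitrary field
	 * sizes.  %gs points at the current CPU's struct pcpu, so the
	 * whole thing is one plain load.
	 */
	struct lock_list_entry *lock_list;

	__asm __volatile("movq %%gs:%c1,%0"
	    : "=r" (lock_list)
	    : "i" (__offsetof(struct pcpu, pc_spinlocks)));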

I can add a few sanity asserts to try to prove this one way or the
other and hope they don't mess with the timing; this has only shown up
when testing with a heavily multi-threaded CIFS server.
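
Something along these lines is what I had in mind -- hypothetical, just
to show the shape of the check:

	sched_pin();
	lock_list = PCPU_GET(spinlocks);
	/*
	 * Paranoia: we're pinned now, so a second read has to come
	 * from the same pcpu area.  If the first load was somehow
	 * satisfied from the old CPU's pcpu before the pin took
	 * effect, the two should disagree and this should fire.
	 */
	KASSERT(lock_list == PCPU_GET(spinlocks),
	    ("witness_warn: stale PCPU_GET(spinlocks) %p", lock_list));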

The only reason I'm hammering on out-of-order execution as the
explanation is that it seems like the only way to account for the
symptoms... unless I'd rather believe that PCPU_GET is completely
busted, which seems less likely.

Thanks,
matthew

