sched_pin() versus PCPU_GET
mdf at FreeBSD.org
Thu Jul 29 23:57:26 UTC 2010
On Thu, Jul 29, 2010 at 4:39 PM, <mdf at freebsd.org> wrote:
> We've seen a few instances at work where witness_warn() in ast()
> indicates the sched lock is still held, but the place it claims the
> lock was acquired is sometimes code that cannot still be holding it,
> like:
>
> thread_lock(td);
> td->td_flags &= ~TDF_SELECT;
> thread_unlock(td);
>
> What I was wondering is, even though the assembly I see in objdump -S
> for witness_warn has the increment of td_pinned before the PCPU_GET:
>
> ffffffff802db210: 65 48 8b 1c 25 00 00 mov %gs:0x0,%rbx
> ffffffff802db217: 00 00
> ffffffff802db219: ff 83 04 01 00 00 incl 0x104(%rbx)
> * Pin the thread in order to avoid problems with thread migration.
> * Once that all verifies are passed about spinlocks ownership,
> * the thread is in a safe path and it can be unpinned.
> */
> sched_pin();
> lock_list = PCPU_GET(spinlocks);
> ffffffff802db21f: 65 48 8b 04 25 48 00 mov %gs:0x48,%rax
> ffffffff802db226: 00 00
> if (lock_list != NULL && lock_list->ll_count != 0) {
> ffffffff802db228: 48 85 c0 test %rax,%rax
> * Pin the thread in order to avoid problems with thread migration.
> * Once that all verifies are passed about spinlocks ownership,
> * the thread is in a safe path and it can be unpinned.
> */
> sched_pin();
> lock_list = PCPU_GET(spinlocks);
> ffffffff802db22b: 48 89 85 f0 fe ff ff mov %rax,-0x110(%rbp)
> ffffffff802db232: 48 89 85 f8 fe ff ff mov %rax,-0x108(%rbp)
> if (lock_list != NULL && lock_list->ll_count != 0) {
> ffffffff802db239: 0f 84 ff 00 00 00    je     ffffffff802db33e <witness_warn+0x30e>
> ffffffff802db23f: 44 8b 60 50 mov 0x50(%rax),%r12d
>
> is it possible for the hardware to do any re-ordering here?
>
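> (For reference, sched_pin() on this branch boils down to a plain,
> unordered increment of a per-thread counter -- paraphrasing the
> inline here rather than quoting the header verbatim -- while
> PCPU_GET(spinlocks) is the single %gs-relative load shown above:)
>
>     static __inline void
>     sched_pin(void)
>     {
>             curthread->td_pinned++;     /* plain store, no barrier */
>     }
>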
> The reason I'm suspicious is not just that the code doesn't have a
> lock leak at the indicated point, but that in one instance I can see
> in the dump that the lock_list local in witness_warn came from the
> pcpu structure for CPU 0 (and the warning was about sched lock 0),
> while the CPU id recorded in panic_cpu is 2. So clearly the thread
> was being migrated right around panic time.
>
> This is the amd64 kernel on stable/7. I'm not sure exactly what kind
> of hardware; it's a 4-way Intel chip from about 3 or 4 years ago IIRC.
>
> So... do we need some kind of barrier in the code for sched_pin() for
> it to really do what it claims? Could the hardware have re-ordered
> the PCPU_GET load ("mov %gs:0x48,%rax") to before the sched_pin()
> increment?
So after some research, the answer I'm getting is "maybe". What I'm
concerned about is whether the hardware reordered the PCPU_GET load
ahead of the earlier store that increments td_pinned. While not an
ultimate authority,
http://en.wikipedia.org/wiki/Memory_ordering#In_SMP_microprocessor_systems
says that stores can be reordered after loads on both Intel and AMD
x86 chips, i.e. a later load may complete before an earlier store. If
the load can pass the increment, a migration in that window would hand
witness the old CPU's spinlock list, which I believe would account for
the behavior seen here.
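
To convince myself the reordering is real, here is a quick userland
litmus test (the classic store-buffer test; my own sketch, not code
from the tree). Two threads each store 1 to their own flag and then
load the other thread's flag; if both loads ever observe 0, a load
must have been satisfied ahead of the program-order-earlier store:

    #include <pthread.h>
    #include <stdio.h>

    static volatile int x, y;       /* the two flags */
    static volatile int r1, r2;     /* what each thread's load saw */
    static pthread_barrier_t barrier;

    static void *
    t1(void *arg)
    {
            (void)arg;
            pthread_barrier_wait(&barrier);
            x = 1;          /* store ... */
            r1 = y;         /* ... then load; x86 may satisfy the
                               load before the store drains */
            return (NULL);
    }

    static void *
    t2(void *arg)
    {
            (void)arg;
            pthread_barrier_wait(&barrier);
            y = 1;
            r2 = x;
            return (NULL);
    }

    int
    main(void)
    {
            pthread_t a, b;
            int i, hits;

            hits = 0;
            for (i = 0; i < 100000; i++) {
                    x = y = r1 = r2 = 0;
                    pthread_barrier_init(&barrier, NULL, 2);
                    pthread_create(&a, NULL, t1, NULL);
                    pthread_create(&b, NULL, t2, NULL);
                    pthread_join(a, NULL);
                    pthread_join(b, NULL);
                    pthread_barrier_destroy(&barrier);
                    if (r1 == 0 && r2 == 0)
                            hits++; /* both loads passed their stores */
            }
            printf("r1 == r2 == 0 observed %d times\n", hits);
            return (0);
    }

On a multi-core amd64 box I'd expect that to print a nonzero count.
If the same reordering can happen between the td_pinned increment and
the PCPU_GET load, then sched_pin() would presumably need a store-load
barrier (an atomic op or an explicit fence) before witness's check can
be trusted -- but I haven't tested a patch, so treat that as
speculation.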
Thanks,
matthew