Why do we need to acquire the current thread's lock before context switching?

Dheeraj Kandula dkandula at gmail.com
Thu Sep 12 20:00:58 UTC 2013


Hey John,
       I think I understand it clearly now.

The td_lock of each thread points to the lock of the queue on which the
thread currently resides, i.e. a run queue, which may be the real-time,
timeshare, or idle run queue. For a sleeping thread I think td_lock ends
up pointing at the sleep queue chain lock, with the global blocked_lock
used only transiently while the thread is being switched out.

Before cpu_switch() is invoked, the old thread's td_lock is released as
shown below; the code is from sched_switch() in sched_ule.c:

    lock_profile_release_lock(&TDQ_LOCKPTR(tdq)->lock_object);

    TDQ_LOCKPTR(tdq)->mtx_lock = (uintptr_t)newtd;


Later, after cpu_switch() is done,


    lock_profile_obtain_lock_success(&TDQ_LOCKPTR(tdq)->lock_object,
        0, 0, __FILE__, __LINE__);


is executed, which acquires the lock of the thread queue on the current
CPU; the thread may now be resuming on a different CPU than the one it
left. I assume the new thread's td_lock points to the current CPU's
thread queue lock.


Now it is also clear how the rule that a mutex is unlocked by the thread
that locked it is preserved across the switch: the old thread hands the
lock off by assigning mtx_lock directly, so the new thread resumes as
the owner and later releases it itself.


Hope my understanding is correct.


Dheeraj



On Thu, Sep 12, 2013 at 2:00 PM, Dheeraj Kandula <dkandula at gmail.com> wrote:

> Thanks John for the detailed clarification. Wow, that is a lot of
> information. I will digest it and will email you any further questions
> that I may have.
>
> Dheeraj
>
>
> On Thu, Sep 12, 2013 at 8:24 AM, John Baldwin <jhb at freebsd.org> wrote:
>
>> On Thursday, September 12, 2013 7:16:20 am Dheeraj Kandula wrote:
>> > Thanks a lot Svatopluk for the clarification. Right after I replied to
>> > Alfred's mail, I realized that it can't be a thread-specific lock, as
>> > it should also protect the scheduler variables. So if I understand it
>> > right, even though it is a mutex, it can be unlocked by another thread,
>> > which is usually not the case with regular mutexes, where the thread
>> > that locks it must unlock it, unlike a binary semaphore. Isn't that so?
>>
>> It's less complicated than that. :)  It is a mutex, but to expand on what
>> Svatopluk said with an example: take a thread that is asleep on a sleep
>> queue.  td_lock points to the relevant SC_LOCK() for the sleep queue chain
>> in that case, so any other thread that wants to examine that thread's
>> state ends up locking the sleep queue while it examines that thread.  In
>> particular, the thread that is doing a wakeup() can resume all of the
>> sleeping threads for a wait channel by holding the one SC_LOCK() for that
>> wait channel since that will be td_lock for all those threads.
>>
>> In general mutexes are only unlocked by the thread that locks them,
>> and the td_lock of the old thread is unlocked during sched_switch().
>> However, the old thread has to grab td_lock of the new thread during
>> sched_switch() and then hand it off to the new thread when it resumes.
>> This is why sched_throw() and sched_switch() in ULE directly assign
>> 'mtx_lock' of the run queue lock before calling cpu_throw() or
>> cpu_switch().  That gives the effect that the new thread resumes while
>> holding the lock pointed to by its td_lock.
>>
>> > Dheeraj
>> >
>> >
>> > On Thu, Sep 12, 2013 at 7:04 AM, Svatopluk Kraus <onwahe at gmail.com>
>> wrote:
>> >
>> > > Think of td_lock as something lent by the thread's current owner.
>> > > If a thread is running, it's owned by the scheduler and td_lock
>> > > points to the scheduler lock. If a thread is sleeping, it's owned by
>> > > a sleep queue and td_lock points to the sleep queue lock. If a thread
>> > > is blocked on a contested lock, it's owned by a turnstile queue and
>> > > td_lock points to the turnstile queue lock. And so on. This way an
>> > > owner can work with the threads it owns safely, without a giant lock.
>> > > The td_lock pointer is changed atomically, so it's safe.
>> > >
>> > > Svatopluk Kraus
>> > >
>> > > On Thu, Sep 12, 2013 at 12:48 PM, Dheeraj Kandula
>> > > <dkandula at gmail.com> wrote:
>> > >
>> > >> Thanks a lot Alfred for the clarification. So is the td_lock
>> > >> granular, i.e. one separate lock for each thread that is also used
>> > >> for protecting the scheduler variables, or is it just one lock used
>> > >> by all threads and the scheduler as well? I will go through the code
>> > >> that you suggested anyway, but I just wanted a deeper understanding
>> > >> before I go hunting in the code.
>> > >>
>> > >> Dheeraj
>> > >>
>> > >>
>> > >> On Thu, Sep 12, 2013 at 3:10 AM, Alfred Perlstein <bright at mu.org>
>> wrote:
>> > >>
>> > >> > On 9/11/13 2:39 PM, Dheeraj Kandula wrote:
>> > >> >
>> > >> >> Hey All,
>> > >> >>
>> > >> >> When the current thread is being context switched with a newly
>> > >> >> selected thread, why is the current thread's lock acquired before
>> > >> >> the context switch? mi_switch() is invoked after thread_lock(td)
>> > >> >> is called. A thread at any given time runs on only one core of
>> > >> >> one CPU. Hence, when it is being context switched, it is added
>> > >> >> either to the real-time run queue, the timeshare run queue, or
>> > >> >> the idle run queue with the lock still held, or it is added to
>> > >> >> the sleep queue or the blocked queue. So this happens atomically
>> > >> >> even without the lock, doesn't it? Am I missing something here?
>> > >> >> I don't see any contention for the thread that would demand a
>> > >> >> lock to protect the contents of the thread structure.
>> > >> >>
>> > >> >> Dheeraj
>> > >> >>
>> > >> >>
>> > >> > The thread lock also happens to protect various scheduler
>> > >> > variables:
>> > >> >
>> > >> >         struct mtx      *volatile td_lock; /* replaces sched lock */
>> > >> >
>> > >> > see sys/kern/sched_ule.c on how the thread lock td_lock is changed
>> > >> > depending on what the thread is doing.
>> > >> >
>> > >> > --
>> > >> > Alfred Perlstein
>> > >> >
>> > >> >
>> > >> _______________________________________________
>> > >> freebsd-arch at freebsd.org mailing list
>> > >> http://lists.freebsd.org/mailman/listinfo/freebsd-arch
>> > >> To unsubscribe, send any mail to
>> > >> "freebsd-arch-unsubscribe at freebsd.org"
>> > >>
>> > >
>> > >
>> >
>>
>> --
>> John Baldwin
>>
>
>

