Why do we need to acquire the current thread's lock before context switching?

Julian Elischer julian at freebsd.org
Fri Sep 13 04:10:05 UTC 2013


On 9/13/13 4:44 AM, Dheeraj Kandula wrote:
> # svn diff
> Index: sys/sys/proc.h
> ===================================================================
> --- sys/sys/proc.h (revision 255488)
> +++ sys/sys/proc.h (working copy)
> @@ -197,12 +197,44 @@
>   };
>
>   /*
> + * Comments by: Svatopluk Kraus & John Baldwin <jhb at freebsd.org>
> + *
> + * Svatopluk Kraus' comment:
> + * Think about td_lock as something that is lent by the thread's current
> + * owner. If a thread is running, it's owned by the scheduler and td_lock
> + * points to the scheduler lock. If a thread is sleeping, it's owned by a
> + * sleep queue and td_lock points to the sleep queue lock. If a thread is
> + * contested, it's owned by a turnstile and td_lock points to the
> + * turnstile lock. And so on. This way an owner can work with the threads
> + * it owns safely without a giant lock. The td_lock pointer is changed
> + * atomically, so it's safe.
> + *
> + * John Baldwin's comment:
> + * For example: take a thread that is asleep on a sleep
> + * queue.  td_lock points to the relevant SC_LOCK() for the sleep queue
> + * chain in that case, so any other thread that wants to examine that
> + * thread's state ends up locking the sleep queue while it examines that
> + * thread.  In particular, the thread that is doing a wakeup() can resume
> + * all of the sleeping threads for a wait channel by holding the one
> + * SC_LOCK() for that wait channel, since that will be td_lock for all of
> + * those threads.
> + *
> + * In general mutexes are only unlocked by the thread that locks them,
> + * and the td_lock of the old thread is unlocked during sched_switch().
> + * However, the old thread has to grab td_lock of the new thread during
> + * sched_switch() and then hand it off to the new thread when it resumes.
> + * This is why sched_throw() and sched_switch() in ULE directly assign
> + * 'mtx_lock' of the run queue lock before calling cpu_throw() or
> + * cpu_switch().  That gives the effect that the new thread resumes while
> + * holding the lock pointed to by its td_lock.
> + */
> +/*
>    * Kernel runnable context (thread).
>    * This is what is put to sleep and reactivated.
>    * Thread context.  Processes may have multiple threads.
>    */
>   struct thread {
> - struct mtx *volatile td_lock; /* replaces sched lock */
> + struct mtx *volatile td_lock; /* replaces sched lock; see the
> +                                  * comment above for details. */
>    struct proc *td_proc; /* (*) Associated process. */
>    TAILQ_ENTRY(thread) td_plist; /* (*) All threads in this proc. */
>    TAILQ_ENTRY(thread) td_runq; /* (t) Run queue. */
>
>
>
> On Thu, Sep 12, 2013 at 4:21 PM, Alfred Perlstein <bright at mu.org> wrote:
>
>> Both these explanations are so great. Is there any way we can add this to
>> proc.h or maybe document somewhere and then link to it from proc.h?
>>
>> Sent from my iPhone
>>
>> On Sep 12, 2013, at 5:24 AM, John Baldwin <jhb at freebsd.org> wrote:
>>
>>> On Thursday, September 12, 2013 7:16:20 am Dheeraj Kandula wrote:
>>>> Thanks a lot Svatopluk for the clarification. Right after I replied to
>>>> Alfred's mail, I realized that it can't be a thread-specific lock, as it
>>>> should also protect the scheduler variables. So if I understand it right,
>>>> even though it is a mutex, it can be unlocked by another thread, which is
>>>> usually not the case with regular mutexes: the thread that locks a
>>>> regular mutex must unlock it, unlike a binary semaphore. Isn't that so?
>>> It's less complicated than that. :)  It is a mutex, but to expand on what
>>> Svatopluk said with an example: take a thread that is asleep on a sleep
>>> queue.  td_lock points to the relevant SC_LOCK() for the sleep queue chain
>>> in that case, so any other thread that wants to examine that thread's
>>> state ends up locking the sleep queue while it examines that thread.  In
>>> particular, the thread that is doing a wakeup() can resume all of the
>>> sleeping threads for a wait channel by holding the one SC_LOCK() for that
>>> wait channel since that will be td_lock for all those threads.
>>>
>>> In general mutexes are only unlocked by the thread that locks them,
>>> and the td_lock of the old thread is unlocked during sched_switch().
>>> However, the old thread has to grab td_lock of the new thread during
>>> sched_switch() and then hand it off to the new thread when it resumes.
>>> This is why sched_throw() and sched_switch() in ULE directly assign
>>> 'mtx_lock' of the run queue lock before calling cpu_throw() or
>>> cpu_switch().  That gives the effect that the new thread resumes while
>>> holding the lock pinted to by its td_lock.
                                    ^^ typo.. fix before commit
>>>
>>>> Dheeraj
>>>>
>>>>
>>>> On Thu, Sep 12, 2013 at 7:04 AM, Svatopluk Kraus <onwahe at gmail.com> wrote:
>>>>> Think about td_lock as something that is lent by the thread's current
>>>>> owner. If a thread is running, it's owned by the scheduler and td_lock
>>>>> points to the scheduler lock. If a thread is sleeping, it's owned by a
>>>>> sleep queue and td_lock points to the sleep queue lock. If a thread is
>>>>> contested, it's owned by a turnstile and td_lock points to the turnstile
>>>>> lock. And so on. This way an owner can work with the threads it owns
>>>>> safely without a giant lock. The td_lock pointer is changed atomically,
>>>>> so it's safe.
>>>>>
>>>>> Svatopluk Kraus
>>>>>
>>>>> On Thu, Sep 12, 2013 at 12:48 PM, Dheeraj Kandula <dkandula at gmail.com> wrote:
>>>>>> Thanks a lot Alfred for the clarification. So is the td_lock granular,
>>>>>> i.e. one separate lock for each thread that also protects the
>>>>>> scheduler variables, or is it just one lock used by all threads and
>>>>>> the scheduler as well? I will go through the code that you suggested,
>>>>>> but I just wanted a deeper understanding before I go hunting in the
>>>>>> code.
>>>>>> Dheeraj
>>>>>>
>>>>>>
>>>>>> On Thu, Sep 12, 2013 at 3:10 AM, Alfred Perlstein <bright at mu.org> wrote:
>>>>>>> On 9/11/13 2:39 PM, Dheeraj Kandula wrote:
>>>>>>>
>>>>>>>> Hey All,
>>>>>>>>
>>>>>>>> When the current thread is being context switched with a newly
>>>>>>>> selected thread, why is the current thread's lock acquired before the
>>>>>>>> context switch? mi_switch() is invoked only after thread_lock(td) is
>>>>>>>> called. A thread at any time runs on only one of the cores of a CPU.
>>>>>>>> Hence, when it is being context switched, it is added to the real-time
>>>>>>>> runq, the timeshare runq, or the idle runq with the lock still held,
>>>>>>>> or it is added to the sleep queue or the blocked queue. So this
>>>>>>>> happens atomically even without the lock, doesn't it? Am I missing
>>>>>>>> something here? I don't see any contention for the thread that would
>>>>>>>> demand a lock to protect the contents of the thread structure.
>>>>>>>>
>>>>>>>> Dheeraj
>>>>>>> The thread lock also happens to protect various scheduler variables:
>>>>>>>
>>>>>>>         struct mtx      *volatile td_lock; /* replaces sched lock */
>>>>>>>
>>>>>>> see sys/kern/sched_ule.c on how the thread lock td_lock is changed
>>>>>>> depending on what the thread is doing.
>>>>>>>
>>>>>>> --
>>>>>>> Alfred Perlstein
>>>>>> _______________________________________________
>>>>>> freebsd-arch at freebsd.org mailing list
>>>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-arch
>>>>>> To unsubscribe, send any mail to "freebsd-arch-unsubscribe at freebsd.org"
>>> --
>>> John Baldwin
>>>


