sched_lock && thread_lock()
Jeff Roberson
jroberson at chesapeake.net
Thu May 24 03:23:58 UTC 2007
On Wed, 23 May 2007, Marcel Moolenaar wrote:
> On May 23, 2007, at 5:11 PM, Jeff Roberson wrote:
>
>>> pmap_switch() is called from cpu_switch() and from pmap_install().
>>> So, currently, pmap_install() grabs sched_lock to mimic the
>>> cpu_switch() path and we assert having sched_lock in pmap_switch().
>>> Basically, any lock that serializes cpu_switch() would work, because
>>> we don't want to switch the thread while in the middle of setting up
>>> the region registers.
>>
>> We could simply use thread_lock() now if this serialization only applies to
>> preventing multiple access to the same thread.
>
> Yes, looks like it.
>
>>>> There are a couple of these small issues that should be perfectly safe
>>>> that I was hoping to address outside of this patch so that it didn't get
>>>> too big.
>>>
>>> I noticed you introduced sched_throw(). Would it harm if ia64
>>> doesn't yet use sched_throw() and instead has the sequence it
>>> replaces? In other words: is the initial implementation of
>>> sched_throw() the same as the current code?
>>
>> The problem is that sched_throw() must acquire the correct scheduler lock
>> before entering cpu_throw(). That's why I moved it into the per-scheduler
>> code. sched_smp, which is the updated ULE, acquires the correct lock
>> for the current CPU.
>
> Sounds like we want to keep ia64 in sync then. Please let me know
> before you commit if you found the time, motivation, whatever to
> include ia64 in the change or not. Either I want to test it or
> I want to fix it ;-)
I updated the patch at people.freebsd.org/~jeff/threadlock.diff
Can you try this on ia64, Marcel? You can try it with both 4BSD and ULE.
ULE may not work, however, if IPI_PREEMPT is not implemented; I think it
may be missing there. I changed the locks and asserts in pmap.c and moved
over to the new sched_throw() interface.
The other change in this diff is that I moved thread_lock_flags() into
kern_mutex.c and fixed the thread-locking loop so that it notices td_lock
transitions: before this fix it could successfully acquire a lock that the
thread no longer points at. This also removes the special case for
blocked_lock and simply does the equivalent of a trylock() spin until
td_lock changes.
Thanks,
Jeff
>
> --
> Marcel Moolenaar
> xcllnt at mac.com
>
More information about the freebsd-arch mailing list