svn commit: r202889 - head/sys/kern

M. Warner Losh imp at bsdimp.com
Tue Jan 26 20:48:22 UTC 2010


In message: <3023270A-755A-4BCF-AC9A-C1F290052279 at mac.com>
            Marcel Moolenaar <xcllnt at mac.com> writes:
: 
: On Jan 26, 2010, at 12:09 PM, M. Warner Losh wrote:
: > cpu_switch(struct thread *old, struct thread *new, struct mutex *mtx)
: > {
: > 	/* Save the registers to the pcb */
: > 	old->td_lock = mtx;
: > #if defined(SMP) && defined(SCHED_ULE)
: > 	/* s/long/int/ if sizeof(long) != sizeof(void *) */
: > 	/* as we have no 'void *' version of the atomics */
: > 	while (atomic_load_acq_long(&new->td_lock) == (long)&blocked_lock)
: > 		continue;
: > #endif
: > 	/* Switch to new context */
: > }
: 
: Ok. So this is what ia64 has already, except for the atomic_load()
: in the while loop. Since td_lock is volatile, I don't think we need
: atomic_load(). To be explicit, ia64 has:
: 
: 		old->td_lock = mtx;
: #if defined(SCHED_ULE) && defined(SMP)
: 		/* td_lock is volatile */
: 		while (new->td_lock == &blocked_lock)
: 			;
: #endif
: 
: Am I right, or am I missing a critical aspect of using atomic load?

The atomic_load_acq variant also issues a memory barrier after the
item is fetched from memory (acquire semantics), which a plain
volatile read does not guarantee.

: > I also think that we should have that code somewhere for reference.
: 
: Since ia64 has a C implementation of cpu_switch(), we could make
: that the reference implementation?

Most likely :)

Warner

