About the memory barrier in BSD libc

Ricardo Nabinger Sanchez rnsanchez at wait4.org
Wed Apr 25 22:55:18 UTC 2012


On Mon, 23 Apr 2012 12:41:20 +0400, Slawa Olhovchenkov wrote:

> /usr/include/machine/atomic.h:
> 
> #define mb()    __asm __volatile("lock; addl $0,(%%esp)" : : : "memory")
> #define wmb()   __asm __volatile("lock; addl $0,(%%esp)" : : : "memory")
> #define rmb()   __asm __volatile("lock; addl $0,(%%esp)" : : : "memory")

Somewhat late on this topic, but I'd like to understand why these barriers 
issue a locked write to (%esp): wouldn't that invalidate the cache line 
holding the top of the stack in other cores' caches, thus forcing a miss on 
them?

Instead, why not issue "mfence" (mb), "sfence" (wmb), and "lfence" (rmb)?
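For reference, a rough sketch of the fence-based definitions I have in mind 
(assuming the CPU supports SSE2; the real header would still need fallbacks 
for older processors, which I have omitted here):

#define mb()    __asm __volatile("mfence" : : : "memory")
#define wmb()   __asm __volatile("sfence" : : : "memory")
#define rmb()   __asm __volatile("lfence" : : : "memory")

These would order loads and stores at the CPU without touching any cache 
line, which is what prompted my question about the locked add to (%esp).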

Cheers

-- 
Ricardo Nabinger Sanchez           http://rnsanchez.wait4.org/
  "Left to themselves, things tend to go from bad to worse."
