svn commit: r297039 - head/sys/x86/x86

Konstantin Belousov kostikbel at gmail.com
Tue Mar 29 08:45:16 UTC 2016


On Mon, Mar 28, 2016 at 11:55:18AM -0700, John Baldwin wrote:
> I think this is more to allow you to keep the TSCs in sync across cores
> more sanely by being able to adjust TSC_ADJ instead of trying to time
> a write to the TSC to apply an offset (which is racy).  If it was targeted
> at SMM, it wouldn't be exposed to the host OS.  I think Intel understands
> at this point that OS's want a synchronized TSC on bare metal for "cheap"
timekeeping (at this point the TSC is more about that than about counting
> CPU "instructions").
There is the RDTSCP instruction and the IA32_TSC_AUX MSR, which provide an
atomic fetch of the per-cpu TSC offset and automatic serialization.  Using
them would avoid the LFENCE accompanying RDTSC, as it is done now.

Hm, yes, IA32_TSC_ADJUST MSR is documented in latest SDM.

> 
> I think your patch later in the thread looks fine.
Thanks, committed as r297374.

> 
> Most of the worry about older hardware with variable TSC's later in the
> thread is becoming less and less relevant.  Intel hasn't shipped a CPU with
> a variable TSC in close to a decade now?
Core2 was initially released in 2006, and the latest mobile models launched
in late 2009.  Core2 has an invariant TSC, but it stops in C3, which makes
the TSC and C3 unusable simultaneously: one of the two has to be given up.

> 
> Bruce's points about the hardcoded timeouts for things like mutexes are well
> founded.  I'm not sure about how best to fix those however.
I think we could use a similar calibration with a fake single-atomic loop.
It would give us a 2x-5x error at runtime, but again, that does not matter.
