A more general possible meltdown/spectre countermeasure

Eric McCorkle eric at metricspace.net
Fri Jan 5 04:05:43 UTC 2018


I've thought more about how to deal with Meltdown/Spectre, and I have an
idea I'd like to put forward.  However, I'm still in something of a
panic mode, so I'm not certain as to its effectiveness.  Needless to
say, I welcome any feedback on this, and I may be completely off-base.

I'm calling this a "countermeasure" as opposed to a "mitigation", as
it's something that requires modification of code as opposed to a
drop-in patch.

== Summary ==

Provide a kernel and userland API by which memory allocation can be done
with extended attributes.  In userland, this could be accomplished by
extending the mmap(2) flags, and I could imagine a malloc-with-attributes
interface.  In kernel space, this must already exist, as drivers need to
allocate memory with various MTRR-type attributes set.
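
For concreteness, here is a minimal sketch of what the userland side
might look like.  MAP_ATTR_UNCACHEABLE and its value are hypothetical
(no such mmap(2) flag exists today); everything else is the standard
mmap(2)/mlock(2)/explicit_bzero(3) interface:

#include <sys/mman.h>
#include <stddef.h>
#include <strings.h>

/* Hypothetical flag requesting an uncacheable mapping; value is
 * illustrative only. */
#define MAP_ATTR_UNCACHEABLE    0x01000000

static void *
alloc_sensitive(size_t len)
{
    void *p;

    p = mmap(NULL, len, PROT_READ | PROT_WRITE,
        MAP_ANON | MAP_PRIVATE | MAP_ATTR_UNCACHEABLE, -1, 0);
    if (p == MAP_FAILED)
        return (NULL);
    (void)mlock(p, len);            /* keep it out of swap as well */
    return (p);
}

static void
free_sensitive(void *p, size_t len)
{
    explicit_bzero(p, len);         /* scrub before releasing */
    (void)munlock(p, len);
    (void)munmap(p, len);
}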

The immediate aim here is to store sensitive information that must
remain memory-resident in non-cacheable memory locations (or, if more
effective attribute combinations exist, in locations with those
attributes instead).  See the rationale for the argument why this
should work.

Assuming the rationale holds, the attack surface should be greatly
reduced.  Attackers would need to grab sensitive data out of stack
frames or similar locations if/when it gets copied there for faster use.
Moreover, if this is done right, it could dovetail nicely into a
framework for storing and processing sensitive assets in more secure
hardware[0] (like smart cards, the FPGAs I posted about earlier, or
other options).

The obvious downside is that you take a performance hit storing things
in non-cacheable locations, especially if you plan on doing heavy
computation in that memory (say, encryption/decryption).  However, this
is almost certainly going to be less than the projected 30-50%
performance hit from other mitigations.  Also, this technique should
work against Spectre as well as Meltdown (assuming the rationale holds).

The second downside is that you have to modify code for this to work,
and you have to be careful not to keep copies of sensitive information
around too long (this gets tricky in userland, where you might get
interrupted and switched out).
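
To make that second point concrete, here is a minimal sketch of the
discipline involved, assuming the hypothetical alloc_sensitive() above;
do_crypto_op() is just a stand-in for whatever actually consumes the
key:

#include <stdint.h>
#include <string.h>
#include <strings.h>

/* Stand-in for whatever actually consumes the key material. */
static void
do_crypto_op(uint8_t *key, size_t keylen)
{
    (void)key;
    (void)keylen;
}

void
use_key(const uint8_t *nocache_key, size_t keylen)
{
    uint8_t tmp[64];

    if (keylen > sizeof(tmp))
        return;
    memcpy(tmp, nocache_key, keylen);   /* short-lived cacheable copy */
    do_crypto_op(tmp, keylen);
    /*
     * Scrub immediately.  The window where a copy exists can't be
     * eliminated entirely -- we may still be preempted inside
     * do_crypto_op() -- which is the tricky userland case noted above.
     */
    explicit_bzero(tmp, sizeof(tmp));
}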


[0]: Full disclosure, enabling open hardware implementations of this
kind of thing is something of an agenda of mine.

== Rationale ==

(Again, I'm tired, rushed, and somewhat panicked, so my logic could be
faulty at any point; please point it out if it is.)

The rationale for why this should work relies on assumptions about
out-of-order pipelines that cannot be guaranteed to hold, but are
extremely likely to be true.

As background, these attacks depend on out-of-order execution performing
operations that end up affecting cache and branch-prediction state,
ultimately storing information about sensitive data in these
side-channels before the fault conditions are detected and acted upon.
I'll borrow terminology from the Meltdown paper, using "transient
instructions" to refer to speculatively executed instructions that will
eventually be cancelled by a fault.
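
As a toy illustration of that pattern (not a working exploit): fault
recovery and the later cache-timing step are omitted, and kptr stands
for an address the caller is not allowed to read:

#include <stdint.h>

/* Probe array whose cache state encodes the leaked byte. */
static volatile uint8_t probe[256 * 4096];

static void
transient_gadget(const volatile uint8_t *kptr)
{
    uint8_t byte;

    byte = *kptr;                /* faulting load of the sensitive byte */
    (void)probe[byte * 4096];    /* dependent, cache-revealing access */
}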

These attacks depend entirely on transient instructions being able to
get sensitive information into the processor core and then perform some
kind of dependent operation on it before the fault condition cancels
them.  Therefore, anything that prevents them from doing this *should*
counter the attack.  If the actual sensitive data never makes it to the
core before the fault is detected, the dependent memory
accesses/branches never get executed and the data never makes it to the
side-channels.

Another assumption here is that CPU architects are going to want to
squash faulted instructions ASAP and stop issuing along those
speculative branches, so as to reclaim execution units.  So I'm assuming
that once a fault comes back from address translation, transient
execution stops dead.

Now, break down the cases for whether the address containing sensitive
data is in the cache and the TLB or not.  (I'm assuming here that caches
are virtually indexed, which enables cache lookups to bypass address
translation[1].)

* In cache, in TLB: You end up basically racing between the cache and
TLB, which will very likely end up detecting the fault before the data
arrives, but at the very worst, you get one or two cycles of transient
instruction execution before the fault.

* In cache, not in TLB: Virtual indexing means you get a cache lookup
racing a page-table walk.  The cache lookup beats the page-table walk by
potentially hundreds (maybe thousands) of cycles, giving you a bunch of
transient instructions before a fault gets triggered.  This is the main
attack case.

* Not in cache, in TLB: Memory access requires address translation,
which comes back almost immediately as a fault.

* Not in cache, not in TLB: You have to do a page table walk before you
can fetch the location, as you have to go out to physical memory (and
therefore need a physical address).  The page table walk will come back
with a fault, stopping the attack.

So, unless I'm missing something here, both non-cached cases defeat the
Meltdown attack, as you *cannot* get the data unless you do address
translation first (and therefore detect the fault).

As for why this defeats the Spectre attack, the logic is similar: you've
jumped into someone else's executable code, hoping to scoop up enough
information into your branch predictor before the fault kicks you out.
However, to capture anything about sensitive information in your
side-channels, the transient instructions need to actually get it into
the core before a fault gets detected.  The same case analysis as above
applies, so you never actually get the sensitive info into the core
before a fault comes back and you get squashed.


[1]: A physically-indexed cache would be largely immune to this attack,
as you'd have to do address translation before doing a cache lookup.


I have some ideas that can build on this, but I'd like to get some
feedback first.

