[RFC] ASLR Whitepaper and Candidate Final Patch
Robert Watson
rwatson at FreeBSD.org
Thu Jul 24 09:43:32 UTC 2014
On Wed, 23 Jul 2014, Tim Kientzle wrote:
>>> I'll take a look at ElectricFence this weekend. Additionally, I have a
>>> netbook somewhere. Once I find it and its power cord, I'll install
>>> FreeBSD/x86 and re-run the same tests on that.
>>
>> Somewhat related to ElectricFence… will ASLR have an adverse effect on
>> debuggers?
>>
>> I googled around and got to this:
>>
>> http://www.outflux.net/blog/archives/2010/07/03/gdb-turns-off-aslr/
>>
>> So I guess we may have to patch gdb (and lldb)?
>
> I suspect the issue here is that debugging often requires multiple runs of a
> program with repeatable behavior between runs.
>
> Consider:
>
> * I run the program under GDB, it crashes at a certain PC address
>
> * I restart the program, set a breakpoint at that PC address
>
> I want to be confident that the PC address where I’m setting the breakpoint
> will have the same meaning between runs.
Non-determinism in debugging is a big issue with
diversification/randomisation-based mitigation techniques. There are a number
of aspects to the problem, but the clearest implication is that it should be
possible to create deterministic and reproducible debugging environments in
the local development context. This means, I think, being able to create a
hierarchy of processes in which the randomisation features are by policy
turned off. The contexts in which that property is set are interesting -- do
you want a "no randomisation subshell" in which every program you run has ASLR
turned off? Or do you just want gdb to turn it off? What if, this time
around, you want gdb to have it turned on? And how do you deal with
setuid/setgid/transitioning binaries -- we don't want a regular user to say
"turn off ASLR for this process subtree" and have it prevent ASLR from
protecting a setuid binary from the user.
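As a concrete illustration of the "turn it off for this subtree" case, here is
a minimal sketch of a wrapper that gdb or a no-randomisation subshell might
use. procctl(2) exists today, but PROC_ASLR_CTL and PROC_ASLR_DISABLE are
names invented here for the kind of per-process control suggested below --
none of this interface exists yet:

#include <sys/types.h>
#include <sys/procctl.h>
#include <sys/wait.h>
#include <err.h>
#include <unistd.h>

#ifndef PROC_ASLR_CTL
#define	PROC_ASLR_CTL		0x7fff	/* hypothetical procctl(2) command */
#define	PROC_ASLR_DISABLE	1	/* hypothetical: randomisation off */
#endif

int
main(int argc, char *argv[])
{
	int arg = PROC_ASLR_DISABLE;

	if (argc < 2)
		errx(1, "usage: norandom command [args ...]");

	/*
	 * Clear the inherited ASLR property on ourselves; exec()
	 * preserves it, so the target (and its children) start with
	 * randomisation off.  A setuid/setgid exec would reset it to
	 * the global default.
	 */
	if (procctl(P_PID, getpid(), PROC_ASLR_CTL, &arg) != 0)
		err(1, "procctl");

	execvp(argv[1], argv + 1);
	err(1, "execvp: %s", argv[1]);
}

Running "norandom gdb ./prog" would then give repeatable addresses for the
whole debugging session without changing policy for anything else.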
I think the natural conclusion is that you need multiple means to disable ASLR
that operate at different granularities, and that have different control
mechanisms. Off-hand, I see a few:
(1) A global enable/disable flag that sets the default policy.
(2) A new inherited process property, or perhaps credential property, enabling
    ASLR. This can be changed to 'disabled' using a system call -- perhaps
    prctl() or similar. If we hit a transitioning binary (e.g., setuid) then
    in the same way that we manipulate other properties, we'd reset to the
    global default. It would be easy to imagine this being CR_ASLR on the
    cr_flags field of the credential. This could be set in various ways by
    userspace applications -- by gdb, as a login-shell property, perhaps via
    'su' or something similar? (A sketch of how these mechanisms might
    compose follows the list.)
(3) As suggested by Kostik, an ELF note on binaries indicating that the binary
    is not ASLR-compatible, which would override (I guess) the global policy
    flag and process/credential flag. We could then set this with
    NO_ASLR=true in Makefiles, during package creation, etc. (A hypothetical
    note layout is also sketched below.)
(4) It sounds like a jail-scope policy would also be useful -- I guess this
might actually be the same as (1) in the sense that (1) could be
represented in terms of a jail-scope policy.
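To make the precedence between these mechanisms concrete, here is a sketch of
how they might compose at image-activation time in the kernel. Only the
general shape (binary note overrides everything; setuid resets to the global
default) comes from the proposal above; aslr_enabled, CR_ASLR, and the
aslr_incompatible field are assumptions for illustration:

#include <sys/param.h>
#include <sys/proc.h>
#include <sys/ucred.h>
#include <sys/imgact.h>

#define	CR_ASLR		0x00000100	/* hypothetical cr_flags bit */

static int aslr_enabled = 1;	/* (1): global default, e.g. a sysctl knob */

static bool
exec_wants_aslr(struct image_params *imgp)
{
	/* (3): a binary marked not ASLR-compatible always wins. */
	if (imgp->aslr_incompatible)	/* hypothetical image_params field */
		return (false);

	/*
	 * Transitioning (setuid/setgid) images reset to the global
	 * default, so an unprivileged "no ASLR" subtree cannot strip
	 * protection from a privileged binary.
	 */
	if (imgp->credential_setid)
		return (aslr_enabled != 0);

	/* (2): otherwise honour the inherited credential flag. */
	return ((imgp->proc->p_ucred->cr_flags & CR_ASLR) != 0);
}

A jail-scope policy as in (4) would simply replace the aslr_enabled global
with a per-jail value looked up from the process's jail.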
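And for (3), a hypothetical sketch of what such an ELF note might look like if
emitted from C; the note name, type number, and section name are invented here
and would in practice come from a system header and be generated by the build
when NO_ASLR=true is set:

#include <stdint.h>

#define	ASLR_NOTE_VENDOR	"FreeBSD"
#define	NT_ASLR_DISABLE		0x1001	/* hypothetical note type */

static const struct {
	uint32_t	namesz;		/* length of name[], incl. NUL */
	uint32_t	descsz;		/* length of desc */
	uint32_t	type;
	char		name[8];	/* "FreeBSD\0" */
	uint32_t	desc;		/* nonzero => not ASLR-compatible */
} no_aslr_note __attribute__((section(".note.aslr"), aligned(4), used)) = {
	.namesz = sizeof(ASLR_NOTE_VENDOR),
	.descsz = sizeof(uint32_t),
	.type = NT_ASLR_DISABLE,
	.name = ASLR_NOTE_VENDOR,
	.desc = 1
};

At exec time the image activator would walk the binary's PT_NOTE segments
looking for this entry and, if found, override both the global and the
process/credential policy.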
I'm not opposed to MAC policy modules being able to manipulate ASLR behaviour,
but I think I'd prefer that the core policy controls (e.g., the above) be
MAC-independent. Part of the reason is that you may want ASLR on very low-end
systems where the additional cost of MAC interposition is more measurable.
How does this interact with features like the Linuxulator? Do we also want
ABI emulations to be able to disable ASLR, since some emulated ABIs might not
support it [well]?
Robert