[RFC] ASLR Whitepaper and Candidate Final Patch

Shawn Webb lattera at gmail.com
Thu Jul 24 17:57:09 UTC 2014


On Jul 22, 2014 08:45 PM -0400, Shawn Webb wrote:
> On Jul 23, 2014 12:28 AM +0100, Robert Watson wrote:
> > On Sun, 20 Jul 2014, Shawn Webb wrote:
> > 
> > >> - It is yet undetermined what the performance effect will be, and it is not 
> > >> clear (but seems likely from past measurements) if there will be a 
> > >> performance hit even when ASLR is off.
> > >> - Apparently there are applications that will segfault (?).
> > >
> > > So I have an old Dell Latitude E6500 that I bought at Defcon a year or
> > > so ago that I'm doing testing on. Even though it's quite an underpowered
> > > laptop, I'm running ZFS on it for BE support (in case one of our changes
> > > kills it). I'll run unixbench on it a few times to benchmark the ASLR
> > > patch. I'll test these three scenarios:
> > >    1) ASLR compiled in and enabled;
> > >    2) ASLR compiled in and disabled;
> > >    3) ASLR compiled out (GENERIC kernel).
> > >
> > > In each of these three scenarios, I'll have the kernel debugging features 
> > > (WITNESS, INVARIANTS, etc.) turned off to better simulate a production 
> > > system and to remove just one more variable in the tests.
> > >
> > > I'll run unixbench ten times under each scenario and I'll compute averages.
> > >
> > > Since this is an older laptop (and it's running ZFS), these tests will take 
> > > a couple days. I'll have an answer for you soon.
> > 
> > Hi Shawn:
> > 
> > Great news that this work is coming to fruition -- ASLR is long overdue.
> > 
> > Are you having any luck with performance measurements?  Unixbench seems like a 
> > good starting point, but I wonder if it would be useful to look, in 
> > particular, at memory-mapping intensive workloads that might be affected as a 
> > result of changes in kernel VM data-structure use, or greater fragmentation of 
> > the address space.  I'm not sure I have a specific application here in mind -- 
> > in the past I might have pointed out tools such as ElectricFence that tend to 
> > increase fragmentation themselves.
> 
> The unixbench tests on that laptop have finished. However, I've been
> fighting a pesky migraine these last couple days, so I haven't had the
> opportunity to aggregate the results into a nice little spreadsheet. I'm
> hoping to finish it up by the end of the week.
> 
> I'll take a look at ElectricFence this weekend. Additionally, I have a
> netbook somewhere. Once I find it and its power cord, I'll install
> FreeBSD/x86 and re-run the same tests on that.
> 
> > 
> > Also, could you say a little more about the effects that the change might have 
> > on transparent superpage use -- other than suitable alignment of large 
> > mappings, it's not clear to me what effect it might have.
> 
> Since we're just modifying the hint passed to the underlying VM system,
> superpage support works as it should with ASLR enabled. The VM system
> will modify the hint in order to be able to use superpages. In those
> cases, we might lose a little bit of entropy. However, due to superpages
> (on amd64, at least) requiring 2MB alignment, you'd lose some entropy no
> matter how ASLR was implemented--at the end of the day, you need that
> alignment for superpages to work.
> 
> > 
> > I wonder if some equipment in the FreeBSD Netperf cluster might be used to 
> > help with performance characterisation -- in particular, very recent high-end 
> > server hardware, and also, lower-end embedded-style systems that have markedly 
> > different virtual-memory implementations in hardware and software.  Often 
> > those two classes of systems see markedly different performance-change 
> > characteristics as a result of greater cache-centrism and instruction-level 
> > parallelism in the higher-end designs that can mask increases in instruction 
> > count.
> 
> Any additional testing would be very much welcome. Our ASLR
> implementation misbehaves on ARM, so testing on ARM-based embedded
> devices is pretty limited. My next goal is to figure out why it bugs out
> on ARM. Essentially, when a child process exits/dies and the parent
> process gets sent SIGCHLD, the parent process' pc register somehow gets
> set to 0xc0000000 and segfaults. Here's a screenshot of the process:
> https://twitter.com/lattera/status/490529645997998080
> 
> FreeBSD 11-CURRENT hasn't been stable at all on sparc64, even without
> the ASLR patches. I have a SunFire 280R box that I've attempted to test
> ASLR on, but I couldn't get a stable enough installation of vanilla
> FreeBSD to work long enough to recompile world/kernel. And generating an
> installation ISO from my amd64 box doesn't work as the VTOC8 bootloader
> isn't recognized by the BIOS (not sure if that's what it's called in
> sparc land).
> 
> > 
> > I think someone has already commented that Peter Holm's help might be 
> > enlisted; you may have seen his 'stress2' suite, which could help with 
> > stability testing.
> 
> I'll take a look at that, too. Thanks a lot for your suggestions and
> feedback.

The unixbench results are in. The overall scores are below.

ASLR Disabled: 456.33
ASLR Enabled:  357.05
No ASLR:       474.03

I've uploaded the raw results to
http://0xfeedface.org/~shawn/aslr/2014-07-24_benchmark.tar.gz

Take these results with a grain of salt, given that some of unixbench's
tests are filesystem-related and I'm running ZFS on an old laptop with
little RAM. It does show that there is a performance impact when ASLR is
enabled.

Within the last day, I have made some changes to clean up the code
behind our ASLR implementation that should improve performance when
ASLR is enabled. I'll re-run the ASLR-enabled tests starting tonight
and have a new set of results tomorrow.

I'll also give ElectricFence a try. Those results will come later.

Thanks,

Shawn


More information about the freebsd-arch mailing list