How does the stack's guard page work on amd64?
Alan Somers
asomers at freebsd.org
Fri Apr 2 05:00:11 UTC 2021
On Thu, Apr 1, 2021 at 12:59 AM Konstantin Belousov <kostikbel at gmail.com> wrote:
> On Wed, Mar 31, 2021 at 10:06:30PM -0600, Alan Somers wrote:
> > On Wed, Mar 31, 2021 at 5:21 AM Konstantin Belousov <kostikbel at gmail.com> wrote:
> > > On Tue, Mar 30, 2021 at 08:28:09PM -0600, Alan Somers wrote:
> > > > On Tue, Mar 30, 2021 at 3:35 AM Konstantin Belousov <kostikbel at gmail.com> wrote:
> > > > > On Mon, Mar 29, 2021 at 11:06:36PM -0600, Alan Somers wrote:
> > > > > > Rust tries to detect stack overflow and handles it differently
> > > > > > than other segfaults, but it's currently broken on FreeBSD/amd64.
> > > > > > I've got a patch that fixes the problem, but I would like someone
> > > > > > to confirm my reasoning.
> > > > > >
> > > > > > It seems like FreeBSD's main thread stacks include a guard page at
> > > > > > the bottom. However, when Rust tries to create its own guard page
> > > > > > (by re-mmap()ping and mprotect()ing it), it seems like FreeBSD's
> > > > > > guard page automatically moves up into the un-remapped region. At
> > > > > > least, that's how it behaves, based on the addresses that
> > > > > > segfault. Is that correct?
> > > > > Show the facts. For instance, procstat -v (and a note which mapping
> > > > > was established by runtime for the 'guard') would tell the whole
> > > > > story.
> > > > >
> > > > > My guess would be that procctl(PROC_STACKGAP_CTL,
> > > > > &PROC_STACKGAP_DISABLE) would be enough. Cannot tell without
> > > > > specific data.
> > > > > >
> > > > > > For other threads, Rust doesn't try to remap the guard page, it
> > > > > > just relies on the guard page created by libthr in
> > > > > > _thr_stack_alloc.
> > > > > >
> > > > > > Finally, what changed in between FreeBSD 10.3 and 11.4? Rust's
> > > > > > stack overflow detection worked in 10.3.
> > > > > >
> > > > > > -Alan
> > > > >
> > > >
> > > > Here is the relevant portion of procstat -v for a test program built
> > > > with the buggy rustc:
> > > >   651        0x801554000        0x80155d000 rw-    0   17   3   0 ----- df
> > > >   651        0x801600000        0x801e00000 rw-   30   30   1   0 ----- df
> > > >   651     0x7fffdfffd000     0x7fffdfffe000 ---    0    0   0   0 ----- --
> > > >   651     0x7fffdfffe000     0x7fffdffff000 ---    0    0   0   0 ----- --
> > > >         <--- What Rustc thinks is the guard page
> > > >   651     0x7fffdffff000     0x7fffe0000000 ---    0    0   0   0 ----- --
> > > >         <--- Where did this come from?
> > > This is the stack grow area, occupied by the 'elastic' guard entry.
> > > It serves two purposes:
> > > 1. it keeps the space, preventing other non-fixed mappings from
> > >    selecting the grow area for mapping.
> > > 2. it prevents the stack from growing down to the next mapping below
> > >    it, preventing issues like StackClash.
> > >
> > > See mmap(2), esp. the MAP_STACK part of it.
> > >
> >
> > I saw that. And I even saw where libthr uses MAP_STACK when creating
> > new threads. However, this program is single-threaded. Where does the
> > stack get created for a process's main thread? I couldn't find that.
> In the kernel, during execve(2). Specifically, sys/kern/kern_exec.c,
> exec_new_vmspace(), the vm_map_stack() call.
>
> >
> > > > 651     0x7fffe0000000     0x7fffe001e000 rw-   30   30   1   0 ---D- df
> > > > 651     0x7fffe001e000     0x7fffe003e000 rw-   32   32   1   0 ---D- df
> > > >
> > > > Rustc tries to create that guard page by finding the base address of
> > > > the stack, reallocating one page, then mprotect()ing it, like this:
> > > >
> > > > mmap(0x7fffdfffe000, 0x1000, 0x3<PROT_READ|PROT_WRITE>,
> > > >     0x1012<MAP_PRIVATE|MAP_FIXED|MAP_ANON>, 0xffffffff, 0)
> > > > mprotect(0x7fffdfffe000, 0x1000, 0<PROT_NONE>)
> > > >
> > > > If I patch rustc to not attempt to allocate a guard page, then its
> > > > memory map looks like this. Notice that 0x7fffdffff000 is now
> > > > accessible:
> > > It is accessible because the stack has grown down into this address.
> > >
> > > > 662        0x801531000        0x80155b000 rw-    3   17   3   0 ----- df
> > > > 662        0x801600000        0x801e00000 rw-   30   30   1   0 ----- df
> > > > 662     0x7fffdfffd000     0x7fffdfffe000 ---    0    0   0   0 ----- --
> > > > 662     0x7fffdfffe000     0x7fffdffff000 ---    0    0   0   0 ----- --
> > > > 662     0x7fffdffff000     0x7fffe001e000 rw-   31   31   1   0 ---D- df
> > > > 662     0x7fffe001e000     0x7fffe003e000 rw-   32   32   1   0 ---D- df
> > > >
> > > > So the real question is, why does 0x7fffdffff000 become protected
> > > > when rustc protects 0x7fffdfffe000?
> > > See above.
> > >
> > > As I said in an earlier response, if you want a fully shrinkable stack
> > > guard, set procctl(PROC_STACKGAP_CTL, &PROC_STACKGAP_DISABLE) during
> > > runtime initialization.
> > >
> > > Or better, do not create a custom guard page at all, relying on the
> > > system guard.
> > >
> >
> > That's what my patch does. But I've only tested it on amd64, and I
> > don't have access to alternative architectures. Does every architecture
> > create a stack guard this way?
>
> Yes.
>
Thanks for the explanation. That led me to what changed since 10.3:
r320317. I've opened a PR with rustc to fix the bug. Thanks for all
your help.
-Alan