optimization levels for 6-STABLE build{kernel,world}
Gary Kline
kline at sage.thought.org
Wed Sep 13 21:42:48 PDT 2006
On Wed, Sep 13, 2006 at 05:25:35PM -0700, Chuck Swiger wrote:
> On Sep 13, 2006, at 4:49 PM, Gary Kline wrote:
> > A couple of things. Will having gcc unroll loops have any
> > negative consequences?
>
> Yes, it certainly can have negative consequences.  The primary intent
> of the option is to change a loop from executing the test or control
> block once per iteration of the body, to executing the loop body
> several times between checks of the test or control block.  The catch
> is that extra setup or post-loop fixup code is often needed to ensure
> the loop body still executes the right number of times, which makes
> the generated binary code much larger.
>
> This can mean that the loop no longer fits within the L1 instruction
> cache, which will usually result in the program going slower, rather
> than faster. Using the option will always increase the size of
> compiled executables.
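>
> To make that concrete, here's a hand-written sketch of the
> transformation (gcc's actual output will differ):
>
>     /* Original: one test per iteration of the body. */
>     int
>     sum_plain(const int *a, int n)
>     {
>             int i, sum = 0;
>
>             for (i = 0; i < n; i++)
>                     sum += a[i];
>             return (sum);
>     }
>
>     /*
>      * Unrolled by 4: one test per four copies of the body, plus
>      * a remainder loop for the 0-3 leftover iterations.  The
>      * duplicated bodies and the fixup epilogue are where the
>      * code growth comes from.
>      */
>     int
>     sum_unrolled(const int *a, int n)
>     {
>             int i, sum = 0;
>
>             for (i = 0; i + 4 <= n; i += 4) {
>                     sum += a[i];
>                     sum += a[i + 1];
>                     sum += a[i + 2];
>                     sum += a[i + 3];
>             }
>             for (; i < n; i++)      /* remainder fixup */
>                     sum += a[i];
>             return (sum);
>     }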
>
> > (I can't imagine how, but better
> > informed than to have something crash inexplicably.)
> > With 6.X safe at -O2 and with -funroll-loops, that should be
> > a slight gain, right?
>
> -funroll-loops is as likely to decrease performance for a particular
> program as it is to help.
Isn't the compiler intelligent enough to have a reasonable
limit, N, on how many times it will unroll a loop, to ensure a
faster runtime?  Something much less than 1000, say; possibly
less than 100.  At least, if the initialization and end-of-loop
code *plus* the loop code itself were too large for the cache,
my thought is that gcc would back out.  I may be giving RMS too
much credit; but if memory serves, the compiler was GNU's first
project.  And Stallman was into GOFAI, &c, for better/worse.[1]
Anyway, for now I'll comment out the unroll-loops arg.
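(For what it's worth, the gcc documentation does list --param
tunables in this area; if I'm reading it right, something like

    CFLAGS+= -funroll-loops \
             --param max-unroll-times=4 \
             --param max-unrolled-insns=100

is supposed to cap how many copies of a body gcc emits and how
large an unrolled loop may grow.  I haven't measured whether the
defaults suit our cache sizes, so treat those numbers as
placeholders.)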
>
> One particular caveat with using that option is that the increase in
> program size apparently causes the initial bootloader code to no
> longer fit within a single sector, making the system unbootable.
>
> > [Dumb] questions: first, what does the compiler do with
> > "-fno-strict-aliasing"?
>
> It prevents the compiler from generating buggy output from source
> code which uses type-punning.
>
> http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
>
> A safe optimizer must assume that an arbitrary assignment via a
> pointer dereference can change any value in memory, which means it
> has to spill and reload any data being cached in CPU registers
> around the use of the pointer, except for consts, variables declared
> "register", and possibly function arguments being passed via
> registers rather than on the stack (cf. "register windows" on SPARC
> hardware, or HP PA-RISC's calling conventions).
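>
> The textbook illustration of the punning that bites -- my sketch,
> not something out of the gcc docs:
>
>     #include <stdio.h>
>
>     /*
>      * Store through a float lvalue, then read the same bytes back
>      * through an int lvalue.  Under -O2 (which implies
>      * -fstrict-aliasing) gcc may assume *f and *i cannot alias,
>      * since float and int are incompatible types, and reorder or
>      * cache the accesses; -fno-strict-aliasing makes it assume
>      * they might alias.
>      */
>     static int
>     punned_bits(float *f, int *i)
>     {
>             *f = 1.0f;
>             return (*i);
>     }
>
>     int
>     main(void)
>     {
>             float x = 0.0f;
>
>             /* Prints 0x3f800000 without strict aliasing; under
>              * -O2 a stale read may surface instead. */
>             printf("%#x\n", punned_bits(&x, (int *)&x));
>             return (0);
>     }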
Well, I'd already added the -fno-strict-aliasing flag to make.conf!
Pointers give me indigestion ... even after all these years.
Thanks for your insights. And the URL.
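For the record, here is what the relevant part of /etc/make.conf
looks like now -- essentially the stock flags, with the unrolling
experiment commented out:

    CFLAGS= -O2 -fno-strict-aliasing -pipe
    #CFLAGS+= -funroll-loops

(If I remember the 6.x defaults right, -fno-strict-aliasing is
already in the system CFLAGS, so spelling it out is just belt and
suspenders.)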
gary
[1]. Seems to me that "good old-fashioned AI" techniques would work in
something like a compiler, where you probably have a good idea of
most of the heuristics. -gk
>
> --
> -Chuck
--
Gary Kline kline at thought.org www.thought.org Public service Unix