cvs commit: src/sys/kern init_main.c kern_malloc.c md5c.c subr_autoconf.c subr_mbuf.c subr_prf.c tty_subr.c vfs_cluster.c vfs_subr.c

Peter Jeremy PeterJeremy at
Fri Jul 25 14:21:46 PDT 2003

On Wed, Jul 23, 2003 at 01:28:24AM +0200, Poul-Henning Kamp wrote:
>Please remember that the problem at hand is getting -Werror back
>on the kernel so we can catch issues like the warning in umtx.

Why is -Werror such a holy grail?  The warnings are still there -
developers should be able to use script(1) or output redirection and
grep(1) to find them.  Not having -Werror has the benefit that you get
to see all the warnings and make doesn't just die at the first error.

I agree that warnings should be minimised so that new ones are easy
to spot, but in some cases compiler warnings are wrong or require
code obfuscation to quieten.

At least in the past, gcc could not do sufficient data-flow analysis
to correctly determine uninitialised variables when the variable in
question was only used within conditionally executed code - and it
erred on the side of caution.  Silencing this warning means adding an
unnecessary initialisation - increasing code size and reducing
performance (admittedly trivially).

Likewise, is "(type *)(intptr_t)foo" any clearer than "(type *)foo"
for removing "const" or "volatile"?  It definitely increases the
"code complexity" (as per McCabe or similar).

>My experiments have shown that if we had just raised the limit high
>enough to inline everything that we have marked as inline, the
>GENERIC kernel text segment would have grown by more than 100 k.
>The inlines I have removed today have all been inlines which GCC
>has previously ignored and which added significant code segment
>size, typically 2k+.
>You can see some of my raw data here: http://phk/misc/inline.txt

These all discuss static code size.  Aside from the regular "what can
we remove from the kernel so it fits on the boot-floppy again"
threads, static code size is irrelevant.  The critical issue is kernel
performance - which on high-end processors depends primarily on how
much code must be moved from RAM into cache and only secondarily on
the amount of code executed.  "More code" does not translate to "more
code executed" - quite a number of optimisation techniques demonstrate
the
opposite.  The objective is to minimise the length of the most common
code paths and either eliminate branches or ensure that the CPU
predicts them correctly - if this means that a rarely executed code
path bloats, this is still a win.


More information about the cvs-all mailing list