FreeBSD Compiler Benchmark: gcc-base vs. gcc-ports vs. clang

Poul-Henning Kamp phk at
Fri Mar 11 14:16:41 UTC 2011

In message <4D7943B1.1030604 at>, Martin Matuska writes:

>More information, detailed test results and test configuration are at
>our blog:

Please don't take this personally, Martin, but you have triggered
my periodic rant about proper running, evaluation and reporting of
benchmark results.

These results are not published at a level of detail that allows
anybody to draw any kind of conclusions from them.

In particular, your use of "overall best" result selection is totally
bogus from a statistical point of view.

At the very least, we need to see standard deviations on your numbers,
and preferably, when you claim that "X is N% better than Y", you should
also provide the confidence interval on that judgment, "Student's T"
being the canonical test.

The ministat(1) program does both of these things, and is now in
FreeBSD/src, so there is absolutely no excuse for not using it.
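For illustration, a minimal ministat(1) run could look like the
following, assuming each candidate's per-run timings have been
collected into a separate file, one number per line (the file names
here are invented for the example):

	ministat -c 95 gcc-base.times clang.times

ministat prints the average and standard deviation for each set and,
at the requested 95% confidence level, either states the difference
between the sets together with its confidence interval, or reports
that no difference was proven at that level.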

In practice this means that you have to run each test at least three
times to get a standard deviation, and you have to make sure that
your test conditions are as identical as possible.

Therefore, proper benchmarking procedure is something like:

	(boot machine single-user  	// Improves reproducibility)
	(mount md(4)/malloc filesystem	// ditto)
	(newfs test-partition		// ditto)
	for at least 4 iterations:
		run test A
		run test B
		run test C
	Throw first result away for all tests
	Run remaining results through ministat(1)
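As a concrete sketch of that loop, something along the following
lines could be used; bench_A/bench_B/bench_C are placeholders for the
actual benchmark commands, the iteration count is only an example,
and the single-user boot, md(4) filesystem and newfs steps are
assumed to have been done by hand beforehand:

	#!/bin/sh
	# Sketch only: bench_A/bench_B/bench_C stand in for the real
	# benchmark commands (e.g. timing a build with each compiler).
	ITERATIONS=5		# one extra; the first run is discarded below
	for i in $(jot $ITERATIONS); do
		for t in A B C; do
			# time(1) -p prints "real <seconds>" on stderr
			/usr/bin/time -p ./bench_$t >/dev/null 2>time.tmp
			awk '/^real/ { print $2 }' time.tmp >> raw_$t
		done
	done
	for t in A B C; do
		# drop the first (warm-up) result from each set
		tail -n +2 raw_$t > result_$t
	done
	# compare any two candidates at the 95% confidence level
	ministat -c 95 result_A result_B

Writing one raw file per test and trimming it afterwards keeps the
warm-up run out of the statistics without special-casing the first
iteration inside the loop.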

This was a public service announcement.


PS: Recommended reading:

Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.
