7+ days of dogfood
erik at cederstrand.dk
Mon Feb 11 11:43:22 UTC 2013
Den 11/02/2013 kl. 00.38 skrev Erich Dollansky <erichsfreebsdlist at alogt.com>:
> On Sun, 10 Feb 2013 15:57:01 +0100
> Erik Cederstrand <erik at cederstrand.dk> wrote:
>> And as long as there is no automatic can taster doing quality
>> assurance of the produced cans, then foul cans will go unnoticed
>> until a dog pukes all over the carpet :-)
> Isn't this the idea of HEAD?
It's certainly not the idea of HEAD that everyone should run into the same bugs, compile errors and runtime failures, or see old bugs pop up again and again. That may be a consequence of running HEAD, but it is certainly not the idea.
>> For this to change, we really need to catch up on years of neglect in
>> e.g. src/tools/regression/. I really applaud the people doing the
>> thankless job of changing this.
> I do not believe that this all can be automated.
I'm not saying that testing is all-or-nothing. OS testing is not easy, and many tests are impractical or expensive if they require real hardware in complicated setups. How do you reliably test an IEEE 802.11s mesh implementation? Or scheduling on huge servers that are too expensive to purchase? I think that is one of the reasons that FreeBSD has not caught up on automated testing and continuous integration.

But regression tests are useful even though they don't give 100% code coverage. Currently, you can't even "make test" in src/tools/regression/ and run the tests that are there. Apart from the compile tests done by the tinderboxes, I'm not aware of any coordinated effort to systematically do runtime or even performance testing of FreeBSD.
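To make the "make test" idea concrete, here is a minimal sketch of what a top-level test driver for a tree like src/tools/regression/ could look like: walk the subdirectories, run each executable test script, and summarize pass/fail. The directory layout and the "runtest" script name are my own illustrative assumptions, not actual FreeBSD conventions.

```shell
#!/bin/sh
# Hypothetical test driver sketch: run every executable named
# "runtest" one level below $1 and report a pass/fail summary.
# Returns nonzero if any test failed, so it could back a
# "make test" target.

run_tests() {
    dir=$1 pass=0 fail=0
    for t in "$dir"/*/runtest; do
        # Skip non-matching globs and non-executable entries.
        [ -x "$t" ] || continue
        if "$t" >/dev/null 2>&1; then
            pass=$((pass + 1))
        else
            fail=$((fail + 1))
            echo "FAIL: $t"
        fi
    done
    echo "passed: $pass failed: $fail"
    [ "$fail" -eq 0 ]
}
```

Something this small obviously doesn't solve the hard problems (hardware-dependent tests, performance baselines), but even a trivial driver would let a tinderbox-style machine report runtime regressions, not just build breakage.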