Regression testing (was Re: Performance issue)

Bakul Shah bakul at
Tue May 10 08:18:49 PDT 2005

This thread makes me wonder if there is value in running
performance tests on a regular basis.  This would give an
early warning of any performance loss and can be a useful
forensic tool: one can pinpoint when some performance curve
changed discontinuously, even if the change was too small to
be noticed at the time.  Over a period of time one can build
up a view of how the performance evolves.
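To make the forensic idea concrete, here is a minimal sketch (the function name, window size, and threshold are all made up for illustration) of flagging the first nightly run where a metric shifts away from its recent trailing average:

```python
# Hypothetical sketch: flag the run where a benchmark metric shifted
# discontinuously, by comparing each sample against the mean of the
# preceding window.  Names and thresholds are illustrative only.

def find_shift(samples, window=5, threshold=0.05):
    """Return the index of the first sample deviating from the
    trailing-window mean by more than `threshold` (as a fraction),
    or None if the series looks stable."""
    for i in range(window, len(samples)):
        baseline = sum(samples[i - window:i]) / window
        if abs(samples[i] - baseline) / baseline > threshold:
            return i
    return None

# e.g. syscall latency (usec) per nightly run; run 8 regressed a
# little -- small enough to miss by eye, large enough to flag here.
history = [1.00, 1.01, 0.99, 1.00, 1.02, 1.01, 1.00, 1.00, 1.07, 1.08]
print(find_shift(history))  # -> 8
```

A real harness would of course need to cope with noisy benchmarks (multiple runs per night, medians instead of single samples), but even something this crude turns "the numbers feel slower" into a specific date to investigate.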

This would not be a single metric but a set of low and high
level measures, such as: syscall overhead, interrupt overhead,
specific h/w devices, disk and fs performance for various
filesystems and file sizes, networking data and pkt
throughput, routing performance, VM and other subsystems, the
effect of SMP, various threading libraries, scaling with the
number of users/programs/cpus/memory, typical applications
under normal and stressed loads, compile time for the system
and kernel, etc. etc. etc.

The setup would allow for easy addition of new benchmarks
(the only way anything like this can be bootstrapped).  Of
course, one would need to record disk/processor/memory speeds
and capacities, plus kernel config options, system build tools
and their options, to interpret the results as well as
possible.  For the results to be useful the setup has to
remain as stable as possible for a long time.
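A sketch of what "recording a number together with its context" might look like (the field names and the helper are hypothetical, not an existing tool; a real setup would also capture dmesg output, build flags, and so on):

```python
# Hypothetical sketch: store each benchmark result alongside the
# machine/build context needed to interpret it years later.  The
# record layout is made up for illustration.
import json
import platform
import time

def make_record(benchmark, value, unit):
    return {
        "benchmark": benchmark,
        "value": value,
        "unit": unit,
        "timestamp": time.time(),
        # context that must travel with the number, or the number
        # becomes uninterpretable once the test box is retired
        "machine": platform.machine(),
        "system": platform.system(),
        "release": platform.release(),
    }

rec = make_record("buildworld", 5400.0, "seconds")
print(json.dumps(rec, indent=2))
```

Append-only records like this, one per benchmark per night, are also what makes the long-term stability requirement tractable: the data format can outlive any particular machine.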

[While I am dreaming...] A follow on project would be to
create visualization tools -- mainly graphing and comparing
graphs.  It would be neat if one could click on a performance
graph to zoom in, or see the commits made during some selected
interval.
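Even before clickable graphs exist, a crude text rendering gets part of the way there.  A sketch (the data and the revision labels are invented) that lines each run up against a commit identifier, so a jump in the bar is immediately attributable to a commit range:

```python
# Hypothetical sketch: a plain-text "graph" of a metric across
# nightly runs, labelled with the last commit in each run, so a
# visible jump points at the commits that landed in between.
def ascii_graph(samples, labels, width=40):
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1.0
    lines = []
    for value, label in zip(samples, labels):
        bar = "#" * (1 + int((value - lo) / span * (width - 1)))
        lines.append(f"{label:>8} {value:7.2f} {bar}")
    return "\n".join(lines)

runs = [1.00, 1.01, 1.00, 1.07, 1.08]
commits = ["r100", "r105", "r112", "r118", "r123"]
print(ascii_graph(runs, commits))
```

The regression between r112 and r118 stands out as a longer bar; a proper tool would replace the labels with links into the repository history.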

Such a detailed look, combined with profiling, can help people
focus on specific hotspots and feel good about any improvements
they are making.  This can be a great way to rope in new
contributors.

More information about the freebsd-performance mailing list