Automated performance testing
Jim C. Nasby
decibel at decibel.org
Mon Jan 31 20:59:59 PST 2005
On Mon, Jan 31, 2005 at 03:24:39PM +0000, Robert Watson wrote:
>
> On Sun, 30 Jan 2005, Jim C. Nasby wrote:
>
> > With all the discussion of performance testing between 4.11, 5.3, and
> > Linux, would it be useful to make performance testing part of the
> > automated testing that already occurs (via tinderbox, iirc)? Doing so
> > might make it easier to detect performance-impacting changes, as well
> > as making performance testing easier in general.
>
> Yes, it would be quite valuable. I've been hoping to set up something
> like this for a while, but have never found the opportunity. I have been
> tracking the long-term behavior of MySQL performance as part of the
> netperf work, but because testing is fairly hardware- and time-consuming,
> the polling intervals are uneven and not quite close enough together to
> nail down culprits. I'd really like to see a small, well-defined set of
> tests run every couple of days so we can show long-term graphs and catch
> regressions quickly. Unfortunately, this is a bit harder than
> tinder-boxing, because it involves swapping out whole system
> configurations, recovering from the inevitable failure modes, etc., which
> proves to be the usual sticking point in implementing this. However, I'd
> love to see someone work on it :-).
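(The workflow described above -- a small, fixed set of tests run on a
schedule, with results accumulated for long-term graphing -- might be
sketched roughly as follows. All names here are hypothetical; nothing
like this harness is part of tinderbox or netperf.)

```python
import csv
import time
from pathlib import Path

def bench_tight_loop():
    # Stand-in for a real micro-benchmark; here we just time a loop.
    start = time.perf_counter()
    for _ in range(100_000):
        pass
    return time.perf_counter() - start

# Hypothetical suite: test name -> callable returning one number.
BENCHMARKS = {"tight_loop": bench_tight_loop}

def run_suite(results_file="results.csv"):
    """Run every benchmark once and append timestamped rows, so the
    CSV accumulates the long-term history needed for graphs."""
    path = Path(results_file)
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp", "test", "value"])
        for name, fn in BENCHMARKS.items():
            writer.writerow([int(time.time()), name, fn()])

run_suite()
```

A cron job (or the tinderbox scheduler) would invoke `run_suite` every
couple of days; the appended history is what makes regressions visible
as a step change in the graph rather than a one-off blip.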
FWIW, I'd suggest something less complicated than a database for
performance testing. For starters, with a database there's no way to
isolate which part of the OS (if any) is responsible for a performance
change. Databases also continually improve their own performance, so
they make for a moving target rather than a stable baseline.
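(A stable baseline is what small, fixed micro-benchmarks buy you: the
same operation measured against its own history. As an illustrative
sketch -- the threshold and history format are assumptions, not
anything proposed in this thread -- a regression check could be as
simple as:)

```python
from statistics import mean, stdev

def flag_regression(history, latest, threshold=3.0):
    """Flag `latest` as a regression when it sits more than
    `threshold` standard deviations above the historical mean.
    `history` is a list of past timings (lower is better)."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return latest > mu
    return (latest - mu) / sigma > threshold
```

Because the benchmark itself never changes, any flagged run points at
the OS change that landed between the last good run and the bad one.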
--
Jim C. Nasby, Database Consultant decibel at decibel.org
Give your computer some brain candy! www.distributed.net Team #1828
Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"
More information about the freebsd-performance mailing list