Timers and timing, was: MySQL Performance 6.0rc1
Chuck Swiger
cswiger at mac.com
Thu Oct 27 13:14:55 PDT 2005
Yuriy N. Shkandybin wrote:
>>> Check the gettimeofday syscall; it follows every I/O syscall. I think
>>> our gettimeofday is too expensive; if we could read the time directly
>>> from memory, performance would improve further.
>
> It's true:
> Run the following on the same PC under FreeBSD and Linux and compare:
[ ...snippet of timing code deleted, see attachment instead... :-) ]
FreeBSD 4.11-STABLE i386
null function: 0.01069
getpid(): 0.51729
time(): 3.51727
gettimeofday(): 3.48715
FreeBSD 5.4-STABLE i386
null function: 0.01278
getpid(): 0.51329
time(): 2.54771
gettimeofday(): 2.54982
Linux 2.6.5 i686
null function: 0.01858
getpid(): 0.01979
time(): 0.44811
gettimeofday(): 0.55776
Darwin 8.2.0 Power Macintosh
null function: 0.01889
getpid(): 0.03590
time(): 0.20913
gettimeofday(): 0.17278
SunOS 5.8 sun4u
null function: 0.05051
getpid(): 1.29846
time(): 1.26596
gettimeofday(): 0.29507
[ These are representative results (in seconds); running the test three times
per host shows the null function time value is stable to two digits, or three
on some hosts; the other values seem to vary by less than 10%. ]
The Intel boxes are all P3s between 700MHz and 1GHz, the Sun is a
dual-proc E450 @ 450MHz, and the other is a Mac Mini @ 1.3GHz, I think.
Real numbers are all well and good, but I don't want to start yet another
thread about microbenchmarks or statistics.
People who are doing timers are generally looking for one of two things: a
cron-like system which schedules periodic or one-shot events over time
intervals of minutes, hours, days, etc. (for which time() and alarm() work
fine), or high-resolution time, either to see how long a call like a SQL
query takes, or to update the display every 10ms, 16.67ms, and so forth for
realtime graphics (via gettimeofday() and usleep()/nanosleep()).
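For the second, high-resolution case, the usual pattern is two gettimeofday()
calls bracketing the work. A minimal sketch (elapsed_seconds() is a
hypothetical helper name, not from the attachment below):

```c
#include <stddef.h>
#include <sys/time.h>

/* Return elapsed wall-clock seconds across a call to f(), bracketing it
   with two gettimeofday() calls as described above.  Resolution is
   microseconds at best, minus the cost of gettimeofday() itself. */
static double elapsed_seconds(void (*f)(void))
{
    struct timeval start, stop;

    gettimeofday(&start, NULL);
    f();
    gettimeofday(&stop, NULL);
    return (stop.tv_sec - start.tv_sec) +
           (stop.tv_usec - start.tv_usec) / 1000000.0;
}
```

This is exactly why a slow gettimeofday() hurts: the measurement overhead is
paid twice around every timed operation.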
It's clear that the Linux getpid() is merely keeping the pid around locally
rather than making a full trip into the kernel, and Darwin seems to be doing
something similar, only with some locking or tracing overhead.
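A userland caching scheme along those lines might look like the sketch below
(my_getpid() is a hypothetical name; a real libc would also have to
invalidate the cache across fork()):

```c
#include <sys/types.h>
#include <unistd.h>

/* Fetch the pid from the kernel once, then serve every later call from
   a static variable -- a plain memory read, no syscall. */
static pid_t cached_pid;

static pid_t my_getpid(void)
{
    if (cached_pid == 0)
        cached_pid = getpid();  /* one real syscall, then cached */
    return cached_pid;
}
```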
It doesn't make sense to keep invoking a hardware clock from the kernel for a
timer which is updated at one-second resolution. Can't we just keep a static
time_t called __now in libc for time() to return or stuff into *tloc, which
gets updated once in a while (have the scheduler check whether the fractional
seconds have rolled over every few ticks)?
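The idea can be sketched entirely in userland with SIGALRM standing in for
the scheduler (cached_time()/cached_time_init() are hypothetical names; in
the actual proposal the kernel would refresh a __now variable for libc):

```c
#include <signal.h>
#include <time.h>
#include <unistd.h>

/* Keep a static time_t refreshed roughly once per second, so the fast
   path is just a memory read instead of a syscall. */
static volatile time_t cached_now;

static void refresh_now(int sig)
{
    (void)sig;
    cached_now = time(NULL);   /* one real syscall per second */
    alarm(1);                  /* re-arm the periodic refresh */
}

static void cached_time_init(void)
{
    signal(SIGALRM, refresh_now);
    refresh_now(SIGALRM);      /* prime the cache immediately */
}

static time_t cached_time(void)
{
    return cached_now;         /* no kernel entry on this path */
}
```

The obvious trade-off is that the cached value can lag the real clock by up
to one refresh interval, which is exactly why it only suits one-second
resolution.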
--
-Chuck
-------------- next part --------------
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/time.h>

typedef void (*null_t)(void);

void null_function(void) {}

void gettimeofday_(void)
{
    struct timeval unused;

    gettimeofday(&unused, NULL);
}

void time_(void)
{
    time_t now = time(&now);
    (void)now;
}

void getpid_(void)
{
    pid_t p = getpid();
    (void)p;
}

void timeit(null_t f, char *name)
{
    struct timeval start;
    struct timeval stop;
    unsigned i;
    double diff_time;
    static double null_time = 0.0;

    if (null_time == 0.0) {
        /* measure the empty-loop overhead once, then subtract it */
        gettimeofday(&start, NULL);
        for (i = 0; i < 1000000; i++)
            null_function();
        gettimeofday(&stop, NULL);
        null_time = (stop.tv_sec - start.tv_sec) +
                    (stop.tv_usec - start.tv_usec) / 1000000.0;
        printf("%20s: %0.5f\n", "null function", null_time);
    }

    gettimeofday(&start, NULL);
    for (i = 0; i < 1000000; i++)
        f();
    gettimeofday(&stop, NULL);
    diff_time = (stop.tv_sec - start.tv_sec) +
                (stop.tv_usec - start.tv_usec) / 1000000.0;
    printf("%20s: %0.5f\n", name, diff_time - null_time);
}

int main(void)
{
    timeit(getpid_, "getpid()");
    timeit(time_, "time()");
    timeit(gettimeofday_, "gettimeofday()");
    return 0;
}
More information about the freebsd-current mailing list