Timers and timing, was: MySQL Performance 6.0rc1

Poul-Henning Kamp phk@phk.freebsd.dk
Thu Oct 27 15:35:29 PDT 2005


In message <43613541.7030009@mac.com>, Chuck Swiger writes:

>It doesn't make sense to keep invoking a hardware clock from the kernel for a 
>timer which is updated at one-second resolution.  Can't we just keep a static 
>time_t called __now in libc for time() to return or stuff into *tloc, which 
>gets updated once in a while (have the scheduler check whether the fractional 
>seconds have rolled over every few ticks)?
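
(For concreteness, the proposal amounts to something like the
following sketch; __now and __update_now are hypothetical names,
and the plumbing that would carry the update from the kernel into
libc is hand-waved:)

	#include <time.h>

	static volatile time_t __now;	/* bumped every few ticks */

	void
	__update_now(time_t t)
	{
		__now = t;
	}

	time_t
	time(time_t *tloc)
	{
		if (tloc != NULL)
			*tloc = __now;
		return (__now);
	}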

That is quite a slippery slope to head down...

Calls to time(2) are actually very infrequent (it sort of follows
logically from the resolution), and therefore they are unlikely to
be a performance concern in any decently thought-out code.

So adding overhead to the scheduler to speed time(2) up is very
likely to be a false economy:  yes, the performance of the time(2)
call will improve, but everything else will slow down as a result,
even in programs which never inspect a single timestamp.

No, this is just the wrong way to attack the problem.


What is needed here is for somebody to define how non-perfect we
are willing to allow our timekeeping to be, and _THEN_ we can start
to look at how fast we can make it work.

Here are some questions to start out:

For reference, the current code's behaviour is noted in [brackets].

    *	Does time have to be monotonic between CPUs ?

		Consider:

		gettimeofday(&t1, NULL)	// on CPU1
		work(x)			// a couple of context switches
		gettimeofday(&t2, NULL)	// on CPU2

		Should it be guaranteed that t2 >= t1 ?

		[Yes]
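
		(A minimal user-land probe for this, assuming an SMP
		box where the scheduler migrates the process between
		CPUs now and then; timercmp() is the standard
		<sys/time.h> macro:)

		#include <sys/time.h>
		#include <stdio.h>

		int
		main(void)
		{
			struct timeval t1, t2;
			long i;

			gettimeofday(&t1, NULL);
			for (i = 0; i < 10000000; i++) {
				gettimeofday(&t2, NULL);
				/* t2 < t1 means time went backwards */
				if (timercmp(&t2, &t1, <))
					printf("backwards: %ld.%06ld"
					    " -> %ld.%06ld\n",
					    (long)t1.tv_sec,
					    (long)t1.tv_usec,
					    (long)t2.tv_sec,
					    (long)t2.tv_usec);
				t1 = t2;
			}
			return (0);
		}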

    *   Does time have to be monotonic between different functions ?

		Consider (for instance):

		clock_gettime(CLOCK_REALTIME, &t1)
		work(x)
		gettimeofday(&t2, NULL)

		Should it be guaranteed that t2 >= t1 ?

		For all mixes of time(), gettimeofday() and
		clock_gettime() ?

		Or only for function pairs in order of increasing
		resolution ?

		hint: think about how we round a timespec of
		1.000000500 to a timeval.

		[t2 >= t1 for all mixes, provided comparison is
		 done in format with lowest resolution and conversion
		 is done by truncation]
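
		(To make the hint concrete: rounding the timespec
		1.000000500 to the nearest timeval gives 1.000001, so
		a gettimeofday() taken *after* that clock_gettime()
		could legally return 1.000000 and appear to precede
		it.  Truncation avoids this.  A sketch, same in effect
		as the TIMESPEC_TO_TIMEVAL() macro in FreeBSD's
		<sys/time.h>:)

		#include <sys/time.h>

		static void
		ts_to_tv(const struct timespec *ts, struct timeval *tv)
		{
			tv->tv_sec = ts->tv_sec;
			/* truncate, never round up */
			tv->tv_usec = ts->tv_nsec / 1000;
		}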

    *	How much variance (jitter) are we willing to accept ?

		Consider:

		gettimeofday(&t1, NULL)
		work(x)			/* always constant duration */
		gettimeofday(&t2, NULL)
		timersub(&t2, &t1, &Twork)	/* Twork = t2 - t1 */

		How much jitter can we live with in Twork ?  (ie:
		how much can Twork vary from run to run of the above
		code)

		Is +/- 1 usec required ?

		Is some constant (but low) +/- N usec OK ?

		Is +/- 1msec acceptable ?
		... +/- 10msec acceptable ?
		... +/- 100msec acceptable ?

		Is 1/hz acceptable ?

		Even when we don't know which hz the user runs with ?

		Is Twork == zero OK if work(x) takes more than 500nsec ?

		[Jitter of +/- 1 count on timecounting hardware]
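
		(A sketch of the measurement loop; the busy-loop
		work() is a stand-in for the constant-duration
		work(x), and timersub() is the standard <sys/time.h>
		macro:)

		#include <sys/time.h>
		#include <stdio.h>

		static void
		work(void)
		{
			volatile long j;

			/* roughly constant duration */
			for (j = 0; j < 1000000; j++)
				;
		}

		int
		main(void)
		{
			struct timeval t1, t2, Twork;
			int i;

			for (i = 0; i < 10; i++) {
				gettimeofday(&t1, NULL);
				work();
				gettimeofday(&t2, NULL);
				timersub(&t2, &t1, &Twork);
				printf("Twork = %ld.%06ld\n",
				    (long)Twork.tv_sec,
				    (long)Twork.tv_usec);
			}
			return (0);
		}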

    *	Do threads have to return ordered timestamps ?

		Consider:

		CPU1			CPU2

		gettimeofday(t1)
					gettimeofday(t2)
		gettimeofday(t3)

		Do we guarantee that
			 t1 < t2 < t3 
		or
			t1 <= t2 <= t3 AND t1 < t3 
		or
			t1 <= t2 <= t3
		or
			t1 <= t3
		?

		[t1 <= t2 <= t3]
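
		(A sketch of that interleaving, using a shared flag so
		that t2 is provably taken between t1 and t3; the spin
		waits are deliberately crude and the threads may or
		may not land on different CPUs:)

		#include <sys/time.h>
		#include <pthread.h>
		#include <stdio.h>

		static struct timeval t1, t2, t3;
		static volatile int stage;

		static void *
		other(void *arg)
		{
			(void)arg;
			while (stage != 1)	/* wait for t1 */
				;
			gettimeofday(&t2, NULL);
			stage = 2;
			return (NULL);
		}

		int
		main(void)
		{
			pthread_t tid;

			pthread_create(&tid, NULL, other, NULL);
			gettimeofday(&t1, NULL);
			stage = 1;
			while (stage != 2)	/* wait for t2 */
				;
			gettimeofday(&t3, NULL);
			pthread_join(tid, NULL);
			if (timercmp(&t2, &t1, <) ||
			    timercmp(&t3, &t2, <))
				printf("ordering violated\n");
			return (0);
		}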

And when you have answered these questions, remember that your
solution needs to be SMP-friendly and work on all architectures.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.

