cvs commit: src/sys/sys time.h src/sys/kern kern_time.c

Robert Watson rwatson at
Sun Nov 27 13:35:34 GMT 2005

On Sun, 27 Nov 2005, Bruce Evans wrote:

> Thus we get a small speedup at a cost of some complexity and large 
> interface bloat.
> This is partly because there are too many context switches and context 
> switches necessarily use a precise timestamp, and file timestamps are 
> under-represented since they normally use a direct access to 
> time_second.

BTW, simple loopback network testing seems to dramatically confirm that 
the impact of time measurement and context switching is quite significant. 
Especially untimely context switching.  I ran some simple netperf TCP 
tests (w/o -DHISTOGRAM) in late October to look at loopback TCP 
performance, which involves two processes and the netisr thread.  On UP, I 
was quite interested in both the negative performance impact of 
preemption and the performance impact of switching to the TSC for 
in-kernel time stamping for context switches.  The kernel in these tests 
is modified to allow immediate preemption of the netisr thread to be 
disabled using a sysctl.  Results are in Mbps.  Note that even once the 
poorly timed context switches caused by undesirable preemption are 
eliminated, we still see a 4.7% performance improvement from lowering the 
cost of the in-kernel time stamp mechanism, which I presume (but have not 
measured) to be due to the continued impact on context switches.
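
For what it's worth, the 4.7% figure can be recomputed directly from the 
nopreempt and nopreempt.tsc averages in the ministat output at the end of 
this message (a quick Python sanity check; the Mbps means are copied from 
that output):

```python
# Mbps averages copied from the ministat output below:
#   * nopreempt     (timecounter-based in-kernel timestamps)
#   % nopreempt.tsc (TSC-based in-kernel timestamps)
nopreempt = 2816.2325
nopreempt_tsc = 2949.0792

improvement = (nopreempt_tsc - nopreempt) / nopreempt * 100
print(f"{improvement:.1f}%")  # -> 4.7%
```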

The problem with preemption is really a fairly fundamental architectural 
one: the netisr model was designed with the notion that the netisr would 
start running "at a good time".  With ithreads waking up the netisr, this 
generally does happen, since ithreads take precedence over the netisr on 
UP.  However, when a normal user thread in the kernel wakes up the netisr due
to sending on the loopback interface, the netisr immediately preempts, 
resulting in a number of "worst case" behaviors, such as immediately 
switching back when trying to acquire locks held by the sending thread. 
On SMP the interactions are quite different, and I am still investigating 
the effects there (disabling immediate preemption in this case on SMP 
actually lowers performance, as the netisr begins running on another CPU 
and then presumably contends the same locks, as well as "migrating" all 
the mbufs from one CPU to another -- I'll know more in a couple of weeks 
when I have time to fix schedgraph for SMP).  It's not that preemption is 
necessarily bad, but it interacts very poorly with the loopback code's 
assumption that a wakeup now won't result in work until a bit later.  I'm 
not sure what the right approach to fixing these problems
is -- we either need to restore (one way or the other) scheduling 
assumptions of the code, or change the code to reflect new scheduling 
assumptions.  Regardless of this issue, the overall impact of time keeping 
on context switches is non-trivial.

x preempt
+ preempt.tsc
* nopreempt
% nopreempt.tsc
|     xx                      +                              *          %% |
|     xx                      +                              **         %% |
|     xx                     ++                              **         %% |
|x    xx                    +++                       * **  ****   %%%  %%%|
|   |_A_|                    |A                          |__AM_|     |_AM_||
     N           Min           Max        Median           Avg        Stddev
x  12        2123.5       2194.31       2186.03     2181.4983     19.144156
+  12       2444.03       2468.44       2463.22     2460.1725     6.5242305
Difference at 95.0% confidence
         278.674 +/- 12.1092
         12.7744% +/- 0.555084%
         (Student's t, pooled s = 14.3015)
*  12       2750.12       2845.31       2829.98     2816.2325     31.188601
Difference at 95.0% confidence
         634.734 +/- 21.9101
         29.0962% +/- 1.00436%
         (Student's t, pooled s = 25.8769)
%  12       2902.27        2979.3       2954.93     2949.0792      25.48312
Difference at 95.0% confidence
         767.581 +/- 19.0828
         35.1859% +/- 0.874755%
         (Student's t, pooled s = 22.5376)
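
For completeness, the ministat comparison above is straightforward to 
reproduce by hand.  A minimal Python sketch of the pooled-variance 
Student's t confidence interval for the first (x vs +) comparison, with 
the t critical value hardcoded (assumed ~2.074 for 22 degrees of freedom, 
since the stdlib has no t quantile function):

```python
import math

# Summary statistics copied from the ministat output above:
n = 12
mean_x, sd_x = 2181.4983, 19.144156   # x: preempt
mean_p, sd_p = 2460.1725, 6.5242305   # +: preempt.tsc

# Pooled standard deviation over the two samples (equal sample sizes).
pooled = math.sqrt(((n - 1) * sd_x**2 + (n - 1) * sd_p**2) / (2 * n - 2))

# 95% confidence half-width for the difference of means; 2.074 is the
# (approximate) 97.5th percentile of Student's t with 22 degrees of freedom.
t_crit = 2.074
half_width = t_crit * pooled * math.sqrt(2 / n)

diff = mean_p - mean_x
print(f"Difference: {diff:.3f} +/- {half_width:.4f}")
print(f"Relative:   {diff / mean_x * 100:.4f}%")
print(f"Pooled s:   {pooled:.4f}")
```

This reproduces the 278.674 +/- ~12.11 difference, the 12.7744% relative 
change, and pooled s = 14.3015 for the first comparison; the other two 
comparisons follow the same arithmetic against the x baseline.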
