rusage breakdown and cpu limits.

Jeff Roberson jroberson at chesapeake.net
Tue May 29 21:50:17 UTC 2007


On Tue, 29 May 2007, John Baldwin wrote:

> On Tuesday 29 May 2007 05:18:32 pm Jeff Roberson wrote:
>> On Wed, 30 May 2007, Bruce Evans wrote:
>>> I see how rusage accumulation can help for everything _except_ the
>>> runtime and tick counts (i.e., for stuff updated by statclock()).  For
>>> the runtime and tick counts, the possible savings seem to be small and
>>> negative.  calcru() would have to run the accumulation code and the
>>> accumulation code would have to acquire something like sched_lock to
>>> transfer the per-thread data (since the lock for updating that data
>>> is something like sched_lock).  This has the same locking overheads
>>> and larger non-locking overheads than accumulating the runtime directly
>>> into the rusage at context switch time -- calcru() needs to acquire
>>> something like sched_lock either way.
>>
>> Yes, it will make calcru() more expensive.  However, this should be
>> infrequent relative to context switches.  It's only used for calls to
>> getrusage(), fill_kinfo_proc(), and certain clock_gettime() calls.
>>
>> The thing that will protect mi_switch() is not process global.  I want to
>> keep process global locks out of mi_switch() or we reduce concurrency for
>> multi-threaded applications.
>
> I still think it would be wise to try the simple approach first and only
> engage in further complexity if it is warranted.

I have indirectly shown, by decreasing the scope of the sched lock in 
other ways, that this approach will not yield sufficient results.  It 
would gate context switches in the same way that a global scheduler lock 
would, just not over as long a period.

Moving the stats to be per-thread really is not very complicated, and it 
very likely optimizes the common case even in the absence of increased 
concurrency: every stats increment requires fewer indirections, and 
mi_switch() touches fewer cache lines.

Jeff

>
> -- 
> John Baldwin
>
