Question about HZ kernel option

Matthew Dillon dillon at apollo.backplane.com
Thu Oct 4 12:39:48 PDT 2007


:Nuts! Everybody has his own opinion on this matter.
:Any idea how to actually build a synthetic but close-to-real
:benchmark for this?

    It is literally impossible to write a benchmark to test this, because
    the effects you are measuring are primarily due to the scheduling
    algorithm and not so much the time quantum.

    One can demonstrate that ultra-low values of HZ are bad, and ultra-high
    values of HZ are also bad, but everything in the middle is subject to
    so much 'noise' from the type of test, the scheduler algorithm, and so
    on that a clean measurement is just impossible.

    This is probably why there is so much argument over the issue.

:For example:
:A typical web server does:
:1) forks
:2) reads a bunch of small files from disk for some time
:3) forks some CGI scripts
:4) dies
:
:If I write a test in C doing something like this and run
:very many of them in parallel for, say, 1 hour and then
:count how many iterations have been done with HZ=100 and
:with HZ=1000, will it be a good test for this?
:
:--
:Regards
:Artem
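
    A rough sketch of the loop described above (hypothetical paths, file
    counts, and busy-loop sizes; not a vetted benchmark).  Each worker
    forks, reads a handful of small files, forks a child standing in for
    a CGI script, and exits.  Run many copies in parallel for a fixed
    wall-clock period and compare the iteration totals under HZ=100 and
    HZ=1000 kernels:

#include <stdio.h>
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define NFILES 8

static volatile sig_atomic_t done;

static void
stop(int sig)
{
    (void)sig;
    done = 1;
}

static void
one_request(void)
{
    char buf[4096], path[64];
    pid_t cgi;
    int i, fd;

    for (i = 0; i < NFILES; ++i) {          /* 2) read small files */
        snprintf(path, sizeof(path), "data/small.%d", i);
        if ((fd = open(path, O_RDONLY)) >= 0) {
            while (read(fd, buf, sizeof(buf)) > 0)
                ;
            close(fd);
        }
    }
    if ((cgi = fork()) == 0) {              /* 3) fork a "CGI" */
        volatile long n;
        for (n = 0; n < 1000000; ++n)       /* stand-in for script work */
            ;
        _exit(0);
    }
    if (cgi > 0)
        waitpid(cgi, NULL, 0);
}

int
main(void)
{
    unsigned long iterations = 0;
    pid_t pid;

    signal(SIGALRM, stop);
    alarm(3600);                            /* run for one hour */
    while (!done) {
        if ((pid = fork()) == 0) {          /* 1) fork per request */
            one_request();
            _exit(0);                       /* 4) die */
        }
        if (pid > 0) {
            waitpid(pid, NULL, 0);
            ++iterations;
        }
    }
    printf("iterations: %lu\n", iterations);
    return 0;
}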

    Well, the vast majority of web pages are served in a microsecond
    timeframe and are clearly not subject to the scheduler quantum because
    the web server almost immediately blocks.  Literally 100 us or less
    and the web server's work is done.

    You can ktrace a web server to see this in action.  Serving pages is
    usually either very fast or the process winds up blocking on I/O
    (again, not subject to the scheduler quantum).
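
    For instance, something along these lines (pid 12345 stands in for the
    web server process; check ktrace(1) and kdump(1) on your system for
    the exact flags):

ktrace -p 12345      # start tracing the running server process
  ...let it serve a few requests...
ktrace -C            # stop tracing
kdump -T | less      # timestamps show each request starting and completing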

    CGIs and applets are another story because they tend to be more
    cpu-intensive, but I would argue that the scheduler algorithm will have
    a much larger effect on performance and interactivity than the time
    quantum.  You only have so much cpu to play with -- a faster HZ will
    not give you more (at HZ=100 the quantum is 10ms, at HZ=1000 it is
    1ms, but the total cpu available is unchanged), so if your system is
    cpu bound it all comes down to the scheduler selecting which processes
    it feels are the most important to run at any given moment.

    One might think that quickly switching between processes is a good idea
    but there are plenty of workloads where it can have catastrophic results,
    such as when an X client is shoving a lot of data to the X server.  In
    that case fast switching is bad because efficient client/server
    interactions depend very heavily on the client being able to build up
    a large buffer of operations for the server to execute in bulk.  X
    becomes wildly inefficient with fast switching... It can wind up going
    2x, 4x, even 8x slower.
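
    A toy illustration of that batching effect (not real X code): a
    producer feeds a consumer over a pipe, once with one small write per
    operation and once with operations batched into a large buffer.  Each
    tiny write tends to wake the consumer and invite a context switch;
    batching lets both sides run longer before switching:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/wait.h>

#define OPS     200000          /* total "operations" to send */
#define OP_SIZE 32              /* bytes per operation */
#define BATCH   256             /* operations per batched write */

static double
now(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    return (tv.tv_sec + tv.tv_usec / 1e6);
}

static double
run(int batch)
{
    char buf[OP_SIZE * BATCH];
    double t0;
    pid_t pid;
    int fd[2], i;

    if (pipe(fd) < 0 || (pid = fork()) < 0)
        exit(1);

    if (pid == 0) {             /* consumer: drain the pipe until EOF */
        close(fd[1]);
        while (read(fd[0], buf, sizeof(buf)) > 0)
            ;
        _exit(0);
    }
    close(fd[0]);
    memset(buf, 'x', sizeof(buf));
    t0 = now();
    for (i = 0; i < OPS; i += batch)    /* one write per op or per batch */
        write(fd[1], buf, (size_t)batch * OP_SIZE);
    close(fd[1]);               /* EOF for the consumer */
    waitpid(pid, NULL, 0);
    return (now() - t0);
}

int
main(void)
{
    printf("one write per op: %.3f sec\n", run(1));
    printf("batched writes:   %.3f sec\n", run(BATCH));
    return 0;
}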

    Generally speaking, any pipelined workload suffers with fast switching
    whereas non-pipelined workloads tend to benefit.  Operations which can
    complete in a short period of time anyway (say 10ms) suffer if they
    are switched out; operations which take longer do not.  One of the
    biggest problems is that applications tend to operate in absolutes (a
    different absolute depending on the application and the situation),
    whereas the scheduler has to make decisions based on counting quantums.

					-Matt
					Matthew Dillon 
					<dillon at backplane.com>

