%cpu in system - squid performance in FreeBSD 5.3

Robert Watson rwatson at freebsd.org
Sat Dec 25 23:14:16 PST 2004

On Sun, 26 Dec 2004, João Carlos Mendes Luís wrote:

>      It must not be that.  Squid is mostly a single-process system, with 
> scheduling based on descriptors and select/poll.  Recent versions added 
> some parallelism in other processes, but just for file reading/writing 
> (diskd) and regular expression processing for ACLs.  Even DNS, which 
> previously ran on blocking I/O in secondary processes, now runs internally 
> in the select/poll scheduler.

Thanks for this information.

> > I might start by turning on kernel profiling and doing a profile dump
> > under load.  Be aware that turning on profiling uses up a lot of CPU
> > itself, so will reduce the capacity of the system.  There's probably
> > documentation elsewhere, but the process I use to set up profiling is
> > here:
>      I have not tested this, but I would expect profiling to fail, since
> every step of the scheduler is very small and deals with the smallest
> I/O available at that time. 

This is kernel profiling, not application profiling, and would hopefully
give us information on where the kernel was spending most of its time,
since in the environment in question system time appears to be dominant. 
If SMP in theory makes little difference to Squid performance, then
switching to a UP kernel may well make kernel profiling more reliable and
hence more useful in tracking system time.

>      Indeed, based on the original report I would search for some
> optimization on descriptor searching in poll or select, whichever squid
> has chosen to use on FreeBSD (probably select, looking at the top
> output).  This is one of the crucial points of Squid performance.  The
> other one is disk access, for sure, but the experiment described would
> not change disk access patterns, would it? 

The reporter described a very high percentage of system time -- time spent
blocked on disk I/O isn't billed to system time.  If a single process were
spending lots of time waiting on disk I/O, you'd see idle time rather than
system time predominating, I believe.

> > As a final question: other than CPU consumption, do you have a reliable
> > way to measure how efficiently the system is operating -- in particular,
> > how fast it is able to serve data?  Having some sort of metric for
> > performance can be quite useful in optimizing, as it can tell us whether
>      One thing I fail to measure in FreeBSD is the reason for delays in
> disk access times.  How can I prove that the delay is on disk, and
> determine how to optimize it?  systat -v is very useful, but does not
> give me all answers. 

I'm not sure there are useful summary tools at a system-wide level for
this, but it is possible to use KTR(9) to trace the associated scheduler
and disk events.  In particular, I recently added high level tracing of
g_down and g_up GEOM events to KTR.  Jeff Roberson is about to commit a
scheduler visualization tool that interprets KTR events relating to the
scheduler that may also be useful.  It would certainly be extremely useful
to have a tool for normal system operation that could be pointed at a
process to say "show me the percent of time spent on various wait channels
for pid 50".  ktrace(1) has the ability to track context switches, but
currently appears not to provide enough information to figure out why a
context switch took place.  I'll investigate this in the next couple of
days -- the trick is to gather this sort of statistic without too much
additional overhead.  If that's not easily possible, then simply
post-processing KTR may be the right approach.

Robert N M Watson             FreeBSD Core Team, TrustedBSD Projects
robert at fledge.watson.org      Principal Research Scientist, McAfee Research
