How can this 'top' command output make sense? Load over 7 and total CPU use ~5%

Glen Barber glen.j.barber at gmail.com
Sun May 24 09:47:14 UTC 2009


On Sun, May 24, 2009 at 5:07 AM, Matthew Seaman wrote:
>>
>> I thought, if it was a dual-core for example, a load average of 1.00
>> would indicate 50% CPU utilization overall (1 process using only 1
>> core)[1].  2.00 on a dual-core would be 100%, 3.00 on a dual-core
>> would be 100% utilization, and always 1 process in the wait queue, and
>> so on.
>
> It seems both ways have been used in different OSes, which is confusing.
> A quick test of a single threaded process that will spin one CPU on a
> multi-core FreeBSD box shows the value is /not/ scaled by the number of
> cores.
>
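
For reference, a minimal sketch of the kind of test described above -- a
single-threaded process that keeps one CPU busy -- could look like this
(the file name spin.c and the build command are my own, not from the thread):

  /*
   * spin.c -- spin one CPU in a tight loop.  Build with `cc -o spin spin.c`,
   * run it, and watch the load average in top(1) or uptime(1) drift toward
   * 1.00 over a minute or two, regardless of how many cores the box has.
   */
  int
  main(void)
  {
      volatile unsigned long counter = 0; /* volatile: keep the loop from
                                             being optimized away */

      for (;;)
          counter++;
      /* never reached */
  }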

Meaning a load average of 1.00 on a single-core versus dual-core means
the same thing?  I can't tell if you said what I said (or meant) with
different wording, or if you said the opposite.  :-)

> Which means that the LA the OP was talking about is actually a lot less alarming
> than it originally appears.  It's clear from the top output that his machine
> has at least 8 cores, so a LA of 7 is really not very heavily loaded.
>

So in this situation, he has 1 core idle all of the time, correct?
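
As a rough way of putting numbers on that: dividing the load average by the
number of online CPUs gives an approximate fraction of total CPU capacity in
use, so 7.00 on 8 cores is about 87.5%, i.e. roughly one core's worth of
capacity idle on average rather than literally one idle core.  A small sketch
along those lines (loadcheck.c and its output format are my own invention,
not from the thread):

  #include <stdio.h>
  #include <stdlib.h>     /* getloadavg(3) */
  #include <unistd.h>     /* sysconf(3) */

  int
  main(void)
  {
      double la[3];
      long ncpu;
      int i;

      if (getloadavg(la, 3) == -1) {
          fprintf(stderr, "getloadavg failed\n");
          return (1);
      }
      ncpu = sysconf(_SC_NPROCESSORS_ONLN);
      if (ncpu < 1)
          ncpu = 1;

      printf("%ld CPUs online\n", ncpu);
      for (i = 0; i < 3; i++)
          printf("load %.2f -> roughly %.0f%% of total capacity\n",
              la[i], 100.0 * la[i] / (double)ncpu);
      return (0);
  }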

>>
>> Does this affect the load average though?  My understanding was that
>> if the CPU cannot immediately process data, the data gets put into the
>> wait queue until L2 Cache (then RAM, etc, etc) returns the data to be
>> processed.
>
> Yes it does: when a process is on the CPU and blocked waiting for IO
> it does not necessarily yield the CPU to another process.  It depends on
> timescales -- obviously if the CPU will have to wait milliseconds for data
> it makes no sense to block other processes.  Waiting a few microseconds is
> a different matter though: it might take that long to load up L2/L3 cache
> with that process's working data, so yielding the CPU for that sort of delay
> would mean the process never got run, which is counterproductive...  It
> helps if the working set is already in the L3 cache -- so having the correct
> amount[*] of cache RAM available is an important design criterion.

Makes sense.
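
To put a rough number on the cache point, here is a toy benchmark sketch
(cachewalk.c, the buffer sizes, and the build flags are all guesses of mine,
not anything from the thread): it touches the same total number of bytes
through a small working set that should stay cache-resident and a large one
that should not, and reports nanoseconds per byte for each.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>

  /* Walk `len` bytes repeatedly until `total` bytes have been touched. */
  static double
  walk(unsigned char *buf, size_t len, size_t total, unsigned long *sum)
  {
      struct timespec t0, t1;
      unsigned long s = 0;
      size_t done, i;

      clock_gettime(CLOCK_MONOTONIC, &t0);
      for (done = 0; done < total; done += len)
          for (i = 0; i < len; i++)
              s += buf[i];
      clock_gettime(CLOCK_MONOTONIC, &t1);

      *sum += s;          /* keep the reads from being optimized away */
      return (((t1.tv_sec - t0.tv_sec) * 1e9 +
          (t1.tv_nsec - t0.tv_nsec)) / (double)total);
  }

  int
  main(void)
  {
      size_t small_len = 256 * 1024;          /* ~256 KB: likely fits in L2/L3 */
      size_t large_len = 256 * 1024 * 1024;   /* ~256 MB: likely does not      */
      unsigned char *small = malloc(small_len);
      unsigned char *large = malloc(large_len);
      unsigned long sum = 0;

      if (small == NULL || large == NULL)
          return (1);
      memset(small, 1, small_len);    /* touch the pages up front so page */
      memset(large, 1, large_len);    /* faults don't distort the timing  */

      printf("small working set: %.3f ns/byte\n",
          walk(small, small_len, large_len, &sum));
      printf("large working set: %.3f ns/byte\n",
          walk(large, large_len, large_len, &sum));
      printf("(checksum %lu)\n", sum);

      free(small);
      free(large);
      return (0);
  }

Built with something like `cc -O1 -o cachewalk cachewalk.c`, the exact gap
between the two numbers will vary from machine to machine, but that gap is
what the quoted explanation is getting at.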

-- 
Glen Barber

