cvs commit: src/usr.sbin/powerd powerd.c

Kevin Oberman oberman at es.net
Tue Aug 30 23:05:21 GMT 2005


> Date: Tue, 30 Aug 2005 10:52:31 +0200
> From: Bruno Ducrot <bruno at poupinou.org>
> Sender: owner-cvs-all at freebsd.org
> 
> On Mon, Aug 29, 2005 at 10:08:25PM -0700, Nate Lawson wrote:
> > Bruno Ducrot wrote:
> > >On Sun, Aug 28, 2005 at 10:36:38PM -0700, Nate Lawson wrote:
> > >
> > >>Another mitigating factor is a patch I hope to commit soon that removes 
> > >>levels that aren't useful.  The general idea is the same as a recent 
> > >>email from Tijl Coosemans but my approach is different.
> > >
> > >
> > >I'm pretty sure it's incorrect to add p4tcc and acpi_throttle for
> > >power-saving purposes.  I plan to add some flags in order to use only
> > >relevant frequencies to this end, but IMHO that should be done in the
> > >low-level drivers.  On the other hand, it is useful to keep the existing
> > >sysctl freqs, but for cooling purposes only.
> > 
> > I think throttling, whether via p4tcc or acpi_throttle, is a useful 
> > addition to absolute frequency control (i.e. est or powernow).  With 
> > appropriate tuning, as I hope the patch I committed provides, the 
> > additional levels should be helpful.
> 
> Except on older processors, I don't see the usefulness for power-saving
> purposes.  The problem is that when the processor is in the stop grant
> state during the throttled part of the duty cycle, it consumes more power
> than it would in the sleep or deep sleep states (or deeper sleep for some).
> If the processor is idle, you will have nearly 100% of the time spent in
> the sleep state (for laptops) or the stop grant state (for desktops), or
> even better if the system supports C3, etc.
> 
> But if you have a duty cycle of (say) 87.5% because the system is idle
> (and powerd has throttled accordingly), then the processor will spend
> 87.5% of its time in the stop grant state, which consumes more power
> than the sleep state.

I'm trying to move this discussion to acpi@, where it really should be
archived; I don't see that it is really relevant to cvs-src or the like.
The point above is absolutely correct, but it is not always relevant.

I am attempting to do some of the testing that the SOC proposal covered,
and I am also coming to the conclusion that, in most cases, TCC and
throttling are not very useful. But I know of cases where they are
absolutely effective.

Three cases: 
1. CPU idle - No measurable difference from TCC detected so far. I
suspect there is a small difference, but it's going to be hard to measure.

2. CPU at a constant, moderate load (mp3 playback) - TCC is detrimental.
You use less (often much less) power running at an unthrottled clock
speed with the system at 10 or 20% CPU than when TCC is used and the
system runs throttled but at 70 or 80% CPU (see the back-of-the-envelope
sketch after this list).

3. CPU compute bound - TCC can reduce power consumption (at a rather
steep cost in performance). This is not generally useful EXCEPT when you
need to keep the system running on battery for an extended time while CPU
bound (e.g. buildworld or building openoffice.org). Here you can keep the
battery alive much longer with TCC than without. I use this when building
openoffice.org, since my laptop needs to move from work location to work
location to home in the course of most builds.
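
To make the stop-grant-versus-sleep argument quoted above (and case 2)
concrete, here is a back-of-the-envelope sketch. The wattages are invented
placeholders, not measurements from any real CPU; the only assumption is
the ordering run > stop grant > sleep.

/*
 * Hypothetical average-power comparison for a fixed amount of work done
 * two ways.  ALL WATTAGES ARE MADE-UP PLACEHOLDERS; only the relation
 * P_RUN > P_STOPGRANT > P_SLEEP is assumed.
 */
#include <stdio.h>

#define P_RUN        20.0    /* W, clock running (hypothetical) */
#define P_STOPGRANT   5.0    /* W, clock gated by TCC/throttling (hypothetical) */
#define P_SLEEP       1.0    /* W, C2/C3-style sleep while idle (hypothetical) */

int
main(void)
{
        double load = 0.125;    /* fraction of wall time the work needs at full clock */

        /* Unthrottled: run flat out, then let the idle loop drop into sleep. */
        double race_to_idle = load * P_RUN + (1.0 - load) * P_SLEEP;

        /*
         * Throttled to a 12.5% duty cycle: the work now fills the whole
         * interval, and the 87.5% of each gating period with the clock
         * stopped sits in stop grant rather than sleep.
         */
        double throttled = load * P_RUN + (1.0 - load) * P_STOPGRANT;

        printf("race to idle: %.2f W average\n", race_to_idle);
        printf("throttled:    %.2f W average\n", throttled);
        return (0);
}

With these placeholder numbers race to idle wins; the real magnitudes will
differ from CPU to CPU, but the direction is the same whenever stop grant
draws more than sleep.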

I'm limited to testing on a single platform with ICHSS and TCC. I hope
to get the tests into scripts that others can run on different platforms
(e.g. EST and AMD) to get more comprehensive results, but that will take
a bit of time. I have only the CPU bound script written at this time,
though the idle case is pretty trivial. The loading by different common
applications is a bit bigger job. As a result, I am uncomfortable
generalizing any results beyond the P4-M case.
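
For what it is worth, the CPU bound test is conceptually just a busy loop
with periodic battery readings. A minimal sketch of that idea (not the
actual script, and assuming a battery that reports hw.acpi.battery.life):

/*
 * Sketch of a CPU-bound battery run-down test: spin the CPU and log the
 * remaining battery percentage roughly once a minute.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdio.h>
#include <time.h>

int
main(void)
{
        volatile unsigned long spin = 0;
        unsigned long i;
        time_t next = time(NULL);
        size_t len;
        int life;

        for (;;) {
                /* Burn CPU for a while before looking at the clock again. */
                for (i = 0; i < 50000000UL; i++)
                        spin += i;
                if (time(NULL) < next)
                        continue;
                next += 60;
                len = sizeof(life);
                if (sysctlbyname("hw.acpi.battery.life", &life, &len,
                    NULL, 0) != 0)
                        break;          /* no battery information; stop */
                printf("%ld %d%%\n", (long)time(NULL), life);
                fflush(stdout);
        }
        return (0);
}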

Those qualifications stated, I'm starting to think that, pending the
completion of testing and the implementation of predictive power
management, it's best to use only the two "native" CPU speeds on my
system and skip any of the TCC-based speeds except when doing something
very CPU intensive.
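
For reference, the levels powerd chooses among (the native speeds plus the
derived TCC/throttle steps) are exposed through the cpufreq sysctls. Here
is a minimal sketch that lists them and shows the current setting, assuming
the usual dev.cpu.0 attachment:

/*
 * List the frequency levels cpufreq(4) advertises for cpu0 and the
 * currently selected frequency; these are the sysctls powerd(8) consults.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdio.h>

int
main(void)
{
        char levels[1024];
        size_t len;
        int freq;

        len = sizeof(levels) - 1;
        if (sysctlbyname("dev.cpu.0.freq_levels", levels, &len, NULL, 0) != 0) {
                perror("dev.cpu.0.freq_levels");
                return (1);
        }
        levels[len] = '\0';
        printf("available levels (freq/power): %s\n", levels);

        len = sizeof(freq);
        if (sysctlbyname("dev.cpu.0.freq", &freq, &len, NULL, 0) == 0)
                printf("current frequency: %d MHz\n", freq);
        return (0);
}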

It's going to take a lot of tweaking of a lot of knobs to really
optimize things. Sort of like converging an old color TV. If only I had
a bit more time to try things... Sigh.
-- 
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: oberman at es.net			Phone: +1 510 486-8634

