tlambert2 at mindspring.com
Wed Apr 9 05:29:57 PDT 2003
Bernd Walter wrote:
> I need realtime behaviour in the (-current) kernel with 1ms
> resolution and a precision of 500us.
> I thought about these two ways:
> - use timeout(9), but it seems that on i386 we only have a
> resolution of 10ms.
> And I don't know of what presision quality I can expect.
> Can the resolution be changed to 1ms, as we have on alpha?
> - attach to the clock service routine.
> I assume the precision will be good enough.
> But how can I find out the resolution on a given hardware?
> What is the best way to solve the problem?
You can increase the resolution by increasing HZ. The 10ms
resolution comes from HZ=100: at 100 ticks per second, the
timer fires only once every 10ms, which yields a 10ms
resolution. Setting HZ=1000 yields your desired 1ms resolution.
Increasing this value will probably not help your _precision_,
however; it certainly won't put it at 500us. For a precision
of 500us at a resolution of 1ms (i.e. "1ms +/- 500us"), you
will need an internal timer resolution of 500us, e.g. HZ=2000.
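As a sketch, the kernel configuration change amounts to one line
(the option name is the standard one; 2000 here follows the
arithmetic above, not a tested recommendation):

```
# In the kernel configuration file, e.g. /sys/i386/conf/MYKERNEL.
# HZ=2000 gives a 500us tick: a 1ms interval with +/-500us headroom.
options         HZ=2000
```

Rebuild and install the kernel after changing this; HZ is a
compile-time constant, not a runtime tunable.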
In addition to this, you will have to arrange for a smaller
value of kern.quantum under SCHED_4BSD. It may not be possible
at all to do what you want using SCHED_ULE: it does not allow
"kern.quantum" to be adjusted, and its affinity code can add
latency when switching from another task to yours, if the
former task has multiple threads in a ready-to-run state.
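As a sketch (sysctl name as it exists under SCHED_4BSD; the value
below is purely illustrative, not a recommendation):

```
# /etc/sysctl.conf (SCHED_4BSD only): shrink the scheduling
# quantum, in microseconds, so a runnable CPU-bound process
# cannot delay your wakeup by a full default quantum.
kern.quantum=10000
```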
However, on PC hardware, if this is a hard deadline, you are
even worse off when using FreeBSD, since PC hardware is only
technically capable of supporting a single hard RT task at a
time, even with deadlining. There are several reasons for this:
1) Interrupt processing does not run in bounded time on
FreeBSD; in other words, no one has bothered to ensure
an interrupt will "take no longer than XX uS to process".
2) Interrupt processing has an implicit priority which is
higher than the highest available RT priority for a user
space application using timers ("rtprio"). This means
that interrupt processing can "livelock" a user process
from ever running.
3) When moving from hardware interrupt processing back to
an interrupted user application, software interrupts
are run; same effect as hardware interrupts.
4) Clock processing. Clock processing occurs every "tick",
and the amount of time it takes is not deterministically
bounded.
The net effect of all these things is that the _resolution_ is
negatively impacted by some bounded value, and the _precision_
is negatively impacted by some unbounded value.
The best you can hope for is to shrink the internal tick
interval sufficiently. This will have little effect on the
externally visible resolution below your minimum interval, but
it will improve your precision considerably. The intent is to
ensure that your "noise" is small enough that:
interval + fixed_delta + variable_delta + noise
ends up being less than 1ms + 500us.
You also have to avoid any load level that comes anywhere near a
level sufficient to result in a livelock situation. This is very
hard, if your platform has networking support, and is on the open
internet, rather than having some hardware in front of it which
can shed load on its behalf, yet still not impact normal load
processing (in general, this means a stateful firewall that lets
in packets for established connections, but no other packets,
e.g. a Cisco PIX).
For most applications with this level of resolution requirement,
it's probably worthwhile to write them as a driver, in the kernel,
and avoid the software interrupt and user space scheduling issues.