Marcel Moolenaar xcllnt at
Wed Aug 26 16:40:03 UTC 2009

On Aug 26, 2009, at 5:06 AM, Bruce Evans wrote:

>>> Everything is in place to remove 0.1% of the coupling.  Debugger i/o
>>> still normally goes to the same device as user and kernel i/o, so it
>>> is strongly coupled.
>> That's a non sequitur. Sharing travel destinations does
>> not mean that you travel together, for example.
> The coupling here occurs at the destination.

Exactly: that's why I said everything is in place to change the
destination of printf().

>> Having printf() not even write to the console does not
>> mean that the debugger cannot keep using the low-level
>> console interfaces...
> It just means that printf() would be slightly broken (no longer
> synchronous, and unusable in panic()...).

printf not being synchronous is actually what solves all
the complexity. We don't need synchronous output in the
normal case. Only for the "broken" case (i.e. kernel
panic, no root FS), do we need synchronous output. It's
the exceptional case.

I believe common sense tells us to optimize for the common
case, not for the exceptional one.

> Note that strong coupling is simplest here.

I disagree. We've had various threads on this topic and
they had the same theme: "we have this interlock and it's
a problem. Help!"

I believe that trying to solve the problem within the
existing framework is not the solution, because I believe
the framework itself is the problem. Rethinking it from the
bottom up helps to disentangle things and come up with a
good design.

>  If debugger i/o is in a
> separate module then it has a hard time even knowing the interrupted
> state.  One impossibly difficult weakly-coupled case is when normal
> i/o is done by a proprietary X driver using undocumented hardware
> features from userland, with some undocumented features active at the
> time of the interrupt.

The question is: why try so hard to solve a problem that's
specific to a case we all try our best to avoid? Isn't it
much easier to say that debugger output and console are not
the same, so that you can run X on syscons and DDB over a
serial interface, and if all else fails: dump a kernel core
and analyze the state offline?

Having an in-kernel debugger is great, but it should be
kept at "arm's length" as much as possible. The moment you
start sharing interfaces or mixing functionality you're
setting yourself up for failure: either the debugger does
not work in certain cases (running X is a perfect example
of how the in-kernel debugger is totally useless) or you
complicate the kernel unnecessarily.

>  Non-debugger console i/o is also impossibly
> difficult in this case.  FreeBSD doesn't attempt to support it, and
> uses cn*avail* interfaces to give null i/o and less than null ddb
> support.  With all the i/o in the kernel, it is possible to guess the
> hardware and driver state by peeking at driver variables and hardware
> registers.  With strong coupling, it is possible to do this robustly.

That's not true. There's no robust way for the kernel debugger
to use hardware that is under the control of process space.
If anything, output is always interrupted and disrupted by the
debugger, so even if the hardware is left in a consistent state,
the actual content on the screen may be garbled.

> Upper layers must cooperate by recording enough of their state in an
> atomic way.  The coupling in lower layers then consists of using the
> records and knowing that they are sufficient.

Upper layers include user space in some cases. The state of the
3D graphics accelerator is not something you want to have to
worry about in the kernel, though you do want to know the "mode"
if you want to write to the frame buffer. Graphical displays are
our weakest point, and given that there's no interest in fixing
it, I can say that no matter what we do in the existing framework
we will never have robust behaviour.

Just my $0.02 of course...

Marcel Moolenaar
xcllnt at

More information about the freebsd-arch mailing list