Trusted Code Base in a UNIX Environment
Robert Watson
rwatson at FreeBSD.org
Mon Apr 17 20:58:52 GMT 2000
Since there has been substantial interest in the TrustedBSD design and
architecture, I thought I'd send out a series of messages over the course
of a couple of weeks breaking out the questions that remain open in the
design of TrustedBSD. The first is: how does the concept of a Trusted
Computing Base fit into a UNIX-style operating system? For reference,
here's the definition of the TCB from the Orange Book:
6.3 THE TRUSTED COMPUTING BASE
In order to encourage the widespread commercial availability of trusted
computer systems, these evaluation criteria have been designed to address
those systems in which a security kernel is specifically implemented as well
as those in which a security kernel has not been implemented. The latter
case includes those systems in which objective (c) is not fully
supported because of the size or complexity of the reference validation
mechanism. For convenience, these evaluation criteria use the term
Trusted Computing Base to refer to the reference validation mechanism,
be it a security kernel, front-end security filter, or the entire
trusted computer system.
The heart of a trusted computer system is the Trusted Computing Base (TCB)
which contains all of the elements of the system responsible for
supporting the security policy and supporting the isolation of objects
(code and data) on which the protection is based. The bounds of the TCB
equate to the "security perimeter" referenced in some computer security
literature. In the interest of understandable and maintainable
protection, a TCB should be as simple as possible consistent with the
functions it has to perform. Thus, the TCB includes hardware, firmware,
and software critical to protection and must be designed and implemented
such that system elements excluded from it need not be trusted to
maintain protection. Identification of the interface and elements of the
TCB along with their correct functionality therefore forms the basis for
evaluation.
For general-purpose systems, the TCB will include key elements of the
operating system and may include all of the operating system. For
embedded systems, the security policy may deal with objects in a way
that is meaningful at the application level rather than at the operating
system level. Thus, the protection policy may be enforced in the
application software rather than in the underlying operating system. The
TCB will necessarily include all those portions of the operating system
and application software essential to the support of the policy. Note
that, as the amount of code in the TCB increases, it becomes harder to
be confident that the TCB enforces the reference monitor requirements
under all circumstances.
So a TCB is a very useful concept, but it's not immediately clear how
various aspects of UNIX-style operating systems do or do not fit into the
TCB. With a monolithic kernel design, it is clear that the kernel should
be entirely within the TCB. Given the presence of dynamically linked
kernel modules, it's also clear that any modules that are permitted to be
linked into the kernel must also fall into the TCB (as well as supporting
infrastructure). Then there's the issue of the plethora of userland
binaries, libraries, and supporting file systems and files. Figuring out
where to draw this boundary is important both for understanding how the
components fit together and for performing a comprehensive security
design and analysis.
If components of the TCB are responsible for authentication- and
auditing-related activities, as well as for protecting components of the
TCB itself, this suggests that a fair swath of the base operating system
binaries and libraries should be part of the TCB from a protection point
of view--for example, most scripts involved in booting, binaries running
with any substantial privilege, and so on.
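As a concrete (and purely illustrative) sketch of what "protection of TCB
components" might mean in practice, one could audit a candidate list of
TCB files for the basic properties we'd expect: owned by root, and
writable by neither group nor other. The paths and policy below are
assumptions for illustration, not a proposed TrustedBSD mechanism:

```python
import os
import stat

def tcb_protection_violations(paths):
    """Return (path, problem) pairs for files failing the basic
    protections we'd expect of TCB components: root ownership,
    and no group- or world-writability."""
    problems = []
    for path in paths:
        try:
            st = os.stat(path)
        except OSError:
            # A missing TCB component is itself a problem worth flagging.
            problems.append((path, "missing"))
            continue
        if st.st_uid != 0:
            problems.append((path, "not owned by root"))
        if st.st_mode & (stat.S_IWGRP | stat.S_IWOTH):
            problems.append((path, "group/world writable"))
    return problems

# Hypothetical TCB file list; a real one would be derived from the
# analysis discussed in this message.
if __name__ == "__main__":
    for path, why in tcb_protection_violations(["/sbin/init", "/etc/rc"]):
        print(path, "--", why)
```

A real system would go further (checking immutable flags, verifying
checksums against a signed manifest), but even this crude check makes the
point that the TCB boundary implies concrete, auditable protection
requirements on specific files.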
This is still not very precise, so I guess I'd like to consider this from
a number of perspectives. First, how does the concept of a TCB play out
in the Orange Book requirements? Second, if a well-defined TCB is
required (whether for Orange Book reasons, or because it's just a good
idea), what procedure should be used for determining which system
components should be part of the TCB? Third, what protection is required
for elements of the TCB, and how does this map onto the various access
control ingredients we can assume will be present (existing uid/gid
protections, kernel securelevel protection, as well as enhancements such
as mandatory access control, capabilities, ACLs, et al.)? Fourth, what
implications does this have from the perspective of further code
implementation, auditing, and design?
There are presumably more interesting questions, as well as a variety of
ways we can look for answers. One important step is to try to take
advantage of work done in the past: how have other trusted operating
system implementations solved this problem? What difficulties have they
had in imposing a TCB on a UNIX-like operating system? Is it as easy as
determining that any code run by the root user is part of the TCB, or are
least-privilege techniques such as capabilities used to restrict this
further?
further? BSD provides securelevels as a limited form of system integrity
MAC policy--is this something we can build on, or should discard in favor
of a stronger and more consistent mechanism such as a general MAC
implementation? What mistakes have been made that we can improve on to
come up with a cleaner (and more secure) design?
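For readers less familiar with the mechanism: the securelevel is a single
kernel integer that a privileged process may raise at any time, but that
only init may lower. At securelevel 1 and above, certain operations, such
as clearing the system immutable and append-only file flags, are denied
even to root. The toy model below illustrates just these monotonic-raise
semantics and why they act as a crude integrity policy; it is an
illustration, not kernel code:

```python
class Securelevel:
    """Toy model of the BSD securelevel: any privileged process may
    raise it, but only init (PID 1) may lower it, so a compromised
    root process cannot quietly weaken the policy."""

    def __init__(self, level=-1):
        # -1 models the "permanently insecure" setting.
        self.level = level

    def set(self, new_level, pid):
        if new_level > self.level:
            # Raising is always permitted for a privileged caller.
            self.level = new_level
        elif new_level < self.level and pid != 1:
            raise PermissionError("only init may lower the securelevel")
        else:
            self.level = new_level

    def may_clear_immutable_flag(self):
        # At securelevel >= 1, the system immutable and append-only
        # file flags may not be cleared, even by root.
        return self.level < 1
```

The weakness, of course, is that this is a single global knob: it
protects a fixed set of operations rather than expressing a policy over
subjects and objects, which is exactly the gap a general MAC
implementation would fill.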
I don't pretend to have thought out all the implications of these
questions, and discussion of the issues on this list would be welcome :-).
Robert N M Watson
robert at fledge.watson.org http://www.watson.org/~robert/
PGP key fingerprint: AF B5 5F FF A6 4A 79 37 ED 5F 55 E9 58 04 6A B1
TIS Labs at Network Associates, Safeport Network Services