FreeBSD Most wanted

Willie Viljoen will at
Thu Mar 4 12:56:14 PST 2004

----- Original Message -----
From: "Daniela" <dgw at>
To: "Jeremy C. Reed" <reed at>; <freebsd-advocacy at>
Sent: Thursday, March 04, 2004 8:04 PM
Subject: Re: FreeBSD Most wanted

>    o better compatibility
> I asked a lot of people what keeps them from dumping Windoze. The main
> reason for not switching is that they fear having to throw out their
> existing data and apps with it.


Sadly, that is more complicated than it seems. The problem with perfect
Windows compatibility is that anything looking and working just like
Windows, to an application, would have to be Windows, or at least an exact
duplicate. The problem with exact duplicates is software patents. In
countries where Microsoft still hold certain software patents, particularly
pertaining to interaction between applications and other applications, or
applications and the operating system, emulating Windows so exactly, even
with the emulation built from completely different source code, just
"pretending" to be Windows, would still infringe on some patents.

More countries are allowing software patents all the time. This shift is
being driven by large corporations, and in many cases the local Microsoft
branch office, lobbying governments around the world.

Getting the kind of compatibility people really want would mean a sacrifice
too great for any operating system project. People wishing to use it in
countries where it infringes on patents would either have to license each
patent upon which it infringes, or would face lawsuits over the illegal use
of these patents. Basically, in order to achieve this compatibility, we
would need to sacrifice the very right to run FreeBSD in many countries in
the world.

What really needs to happen is that old-hat software developers need to
catch up with the times. Developing cross platform software that will work
on many UNIX platforms, and on Windows, even Macintosh, without major code
rewriting or great cost has long been possible. Developers mostly just
ignore this possibility because they generally refuse to change to newer
ways of developing software, which involve designing applications, from the
outset, to be easily portable. The only thing needed for this is the right
attitude.

There is an excellent case in point for development of cross platform
capable software. The ease with which the computer gaming industry can port
new titles between the traditional Windows PC platform, UNIX-based PCs and
the numerous game consoles on the market demonstrates the effectiveness of
cross-platform design. In many cases, where developers took the short
extra time to write a proper abstraction layer for the game, porting only
required a plug-in that lets the abstraction layer interact with whichever
API is available on the target platform.

Large game developers these days are finding it easy to release games that
run on pretty much anything. Instead of writing the game to interact
directly with an API, it is written to interact with an abstraction layer
providing all the functionality the game requires. Then, they need only
write a "driver", interfacing the abstraction layer with each API on each
platform. For instance, a DirectX driver for Windows, and a Mesa video
driver with standard UNIX /dev/dsp audio output for UNIX systems.
Development of these "drivers" is extremely quick. While it takes slightly
longer to develop an extensive abstraction layer, the reward of being able
to port effortlessly to any platform greatly offsets the cost and time of
development of the abstraction layer.
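The driver-behind-an-abstraction-layer pattern described above can be sketched roughly as follows. All class and method names here are invented for illustration; no real engine or API is being quoted.

```python
from abc import ABC, abstractmethod

# Hypothetical abstraction layer: the game talks only to Renderer,
# never to a platform API directly.
class Renderer(ABC):
    @abstractmethod
    def draw_triangle(self, vertices):
        """Draw one triangle; returns a description for demonstration."""

# One thin "driver" per platform API. In a real engine these would wrap
# Direct3D or OpenGL/Mesa calls; here they just report what they would do.
class DirectXRenderer(Renderer):
    def draw_triangle(self, vertices):
        return f"Direct3D: drew {len(vertices)}-vertex triangle"

class MesaRenderer(Renderer):
    def draw_triangle(self, vertices):
        return f"OpenGL/Mesa: drew {len(vertices)}-vertex triangle"

def render_frame(renderer):
    # The game logic is identical on every platform; only the driver differs.
    return renderer.draw_triangle([(0, 0), (1, 0), (0, 1)])
```

Porting to a new platform then amounts to writing one more Renderer subclass; the game code itself never changes.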

Developers of other applications can learn from this case and develop their
software to be easily portable. Sadly, most executives still don't see it
that way, feeling that it is better to write software to interact directly
with whichever platform is targeted, saving a small amount of time and
money while inadvertently helping Microsoft hold on to their PC monopoly.
Until that philosophy changes, people will not move away from Windows.

Also, discarding old applications should never be seen as being such a bad
thing. This is also a question of having the right attitude. Applications,
by the very definition, become outdated or obsolete after a sufficient
period of time. Economic conditions may change, more data may need to be
collected, larger datasets may need to be handled. The traditional, cheap,
solution to this problem is what the engineering industry calls
"retrofitting".

To explain it in engineering terms, a building not up to current building
codes, at risk of collapse due to earthquakes for instance, may be
wrapped in steel lattices to support and strengthen it. While this protects
the building from the quake hazard, it also greatly reduces its
usefulness. The heavy lattices are aesthetically displeasing, limit
movement and visibility on outside portions of the building and generally
cause the building to be less desirable.

The obvious solution to this problem would be to condemn the building,
demolish it, and replace it with a building designed from the ground up to
be safe. Thus, the building can be practical and aesthetically pleasing
without needing to compromise on safety. Sadly, building owners seldom
choose this solution, opting instead for the ugly "retrofitting", because
it saves time and is slightly cheaper.

In many cases, IT management are inclined to go the same way as landlords.
"Retrofitting" an existing program, or, extending and enhancing the old
program to bring it up to date with new requirements is cheaper and takes
less time. Do not confuse this with the constant development of projects
like FreeBSD. Commercial applications, in almost all cases, tend to stagnate
in terms of development. This leads to an eventual situation where the
application will begin to lag so far behind current trends that it may
suddenly become unusable.

The obvious solution, again, would be to throw out the application, and
redesign it from the ground up to be in line with current requirements.
However, in most cases, the old programs merely get code segments added on
to them, usually in a fragmented way, by different developers, at different
points in time, using different techniques peculiar to the time. This is
what programmers call "cruft": new code layered onto older code layered
onto older code layered onto older code (infinite recursion), which
eventually leads to a source code base that is impossible to maintain, full
of security holes, riddled with hidden bugs in the "sedimentary layers"
below the newer code, and in many cases slow, unoptimized, unstable and
unable to scale well.
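A toy illustration of how such layering accumulates (the routines and their behaviour are invented for this example, not taken from any real product): each generation wraps the previous one instead of replacing it, so every call still passes through the oldest code.

```python
# Invented example of "cruft" accumulation: each generation of the
# routine wraps the previous one instead of replacing it.

def read_name_v1(name):
    # Original routine: uppercase, 8-character names only (DOS-era rules).
    return name.upper()[:8]

def read_name_v2(name):
    # Later patch: bolt long-name support on top; the old path stays live.
    if len(name) > 8:
        return name            # new behaviour, added beside the old
    return read_name_v1(name)  # short names still fall through to old code

def read_name_v3(name):
    # Still later: accept raw bytes too -- another layer, another developer.
    if isinstance(name, bytes):
        name = name.decode("ascii")
    return read_name_v2(name)
```

Every call to the "current" routine still executes the sedimentary layers beneath it, and a bug or security hole in the oldest layer remains reachable forever.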

"Crufty" applications pervade the computer industry. Instead of paying a
little bit more for a new application capable of meeting current
requirements, management simply choose to stack endless layers of cruft on
top of each other. Here, a fantastic case in point is Microsoft's (now
finally dead) Windows 95, 98 and Me range of operating system products. The
original Windows 95 GUI sat on top of MS-DOS. The excuse provided for this
was that it would allow Windows 95 to be compatible with older applications.
The real reason is simple: someone was too lazy or too stingy to develop a
new bootstrap loader and file system.

That bootstrap loader and file system did come, in the form of Windows NT
and NTFS. Microsoft presented the world with a new, more powerful file
system and a Windows which effectively was in control of the system right
from boot time.

Yet, in spite of this, Windows 98 was released three years later. Still
sitting atop MS-DOS, and still using the FAT file system, although it had
by then been extended to support 32-bit addressing, and still adding the
VFAT layer above FAT32 to allow long file names. The same applies to
Windows Me. Under the guise of backward compatibility, Microsoft were still
storing users' data on a crufty extension of a file system first developed
15 years earlier, despite FAT having proven itself to be susceptible to
data loss, corruption and all kinds of other nasty mishaps.

Also, even up to Windows Me, where it had been hidden brilliantly, the
same old 16-bit real mode environment still booted these systems. Only
after the GUI module started to load did the system switch to a form of
pseudo-protected-mode 32-bit memory management.

End of the ranting. The point I am trying to make is that the fundamental
problem here is the attitude of the market. The cruft has got to go. An
excellent case in point is XFree86. The version currently in mainstream
use, XFree86 4, is a total, ground-up redesign of the system. Amongst the
improvements is DRI (the Direct Rendering Infrastructure), which allows X
and Mesa to render directly to a 3D accelerator. Several attempts were made
at interfacing XFree86 3 with 3D accelerators; these will all be remembered
as resounding failures, barring a few little-known projects still under
development or which did work, but only with some hardware. With the
arrival of the redesigned XFree86 4, the problems interfacing with new
hardware and employing new techniques that had plagued XFree86 3 simply
disappeared overnight. XFree86 3 had expired, and in spite of its many
years of faithful service to the UNIX community, seeing it go, replaced by
a newer, more up-to-date, much better system, was a joyous day for me.

In short, the longer development time, and for commercial applications the
slightly higher cost, of throwing out old, obsolete software and replacing
it with software designed from the outset to do what is needed, is well
worth it.

Sadly, old-hat management seldom seem to see it that way, so the cruft will
be around as long as the computer, just as retrofitted buildings will be
around to spoil the skylines of great cities for centuries to come.

