remote operation or admin

Erik Trulsson ertr1013 at
Wed Mar 19 20:48:48 UTC 2008

On Wed, Mar 19, 2008 at 02:03:54PM -0400, Chuck Robey wrote:
> Hash: SHA1
> Jeremy Chadwick wrote:
> > On Wed, Mar 19, 2008 at 01:01:45PM -0400, Chuck Robey wrote:
> >> What is most important in my considerations is: how might it be possible
> >> to stretch our present SMP software to be able to extend the management
> >> domains to cover multiple computers?  Some sort of a bridge here, because
> >> there is no software today (that I'm aware of, and that sure leaves a huge
> >> set of holes) that lets you manage the cores as separate computers, so that
> >> maybe today I might be able to have an 8 or 10 core system, and maybe
> >> tomorrow look at the economic and software possibility of having a 256 core
> >> system.  I figure that there would need to be some tight reins on latency,
> >> and you would want some BIGTIME comm links; I dunno, maybe not be able to
> >> use even Gigabit Ethernet, maybe needing some sort of SCSI bus linkage,
> >> something on that scale?  Or is fiber getting to that range yet?
> >>
> >> Anyhow, is it even remotely possible for us to be able to stretch our present
> >> SMP software (even with its limitation on word size, which limits the range to
> >> 32 processors) to be able to jump across machines?  That would be one hell
> >> of a huge thing to consider, now wouldn't it?
> > 
> > Ahh, you're talking about parallel computing, "clustering", or "grid
> > computing".  The Linux folks often refer to an implementation called
> > Beowulf:
> > 
> >
> > 
> > I was also able to find these, more specific to the BSDs:
> > 
> >
> >
> >
> > 
> Well, I am, and I'm not; if you could answer me one question, then I would
> probably know for sure.  What is the difference between our SMP and the
> general idea of clustering, as typified by Beowulf?  I was under the
> impression I was talking about seeing the possibility of moving the two
> closer together, but maybe I'm confused in the meanings?

The short version is that software written for SMP and software written
for clusters make very different assumptions about which operations are
available and what their relative costs are.  Software written for one
of them will typically either not run at all, or run very inefficiently,
on the other.

Longer version:

SMP (Symmetric Multi Processing) refers to a situation where all the
CPUs involved are 'equal' and all use the same shared physical memory.
In an SMP system it does not really matter which CPU you run a program
on, since they are all equal and all have the same access to memory.
One important feature of such a system is that when one CPU writes to
memory all the others can see that write.

A close relative of SMP is NUMA (Non-Uniform Memory Access), with the
most popular variant being ccNUMA (cache coherent NUMA).  Here all CPUs
still share the memory, but different parts of memory can be differently
expensive to access depending on which CPU is involved.
(For example: CPU 1 might have fast access to memory area A and slow access
to memory area B, while CPU 2 has fast access to B and slow access to A.)
(Many multi-CPU machines in use today are actually ccNUMA even if they are
often called SMP.  Software written for SMP will typically run unmodified
on a ccNUMA system, although perhaps with somewhat suboptimal performance.)

In a cluster, on the other hand, the CPUs do not share physical memory.
They cannot automatically see each other's memory operations.  Any
communication between the CPUs must take place over the network (which
is much slower than the internal buses inside a computer).

So in an SMP system communication between CPUs is fast, switching which
CPU is running a given program is a cheap and simple operation, and
much of the work in synchronizing the CPUs is taken care of automatically
by the hardware.
In a cluster communication between nodes is expensive, transferring a
program from one CPU to another is slow and complicated, and one needs to do
extra work to keep each CPU aware of what the others are doing.

In a clustered system one usually also has to take care of the case that
one node crashes or the network connection to it is broken.  In typical
SMP systems one just ignores the possibility of one CPU crashing.
(The exception being some mainframe systems with high redundancy where one
can replace almost all components (including CPUs) while the machine is
running.  There is a reason why these cost lots and lots of money.)

A system that is written to work in a clustered environment can fairly
easily be moved to run on an SMP machine, but it will do a lot of work
that is not necessary under SMP and thus not make very good use of the
hardware.  Moving from SMP to cluster is more difficult.  One can emulate
the missing hardware support in software, but this has a very high
overhead.  Or one can rewrite the software completely, which is a lot of work.

FreeBSD is written for SMP systems and makes many assumptions about the
capabilities of the underlying hardware.  Modifying FreeBSD to run
efficiently and transparently on top of a clustered system would be a *huge*
undertaking.

<Insert your favourite quote here.>
Erik Trulsson
ertr1013 at

More information about the freebsd-hackers mailing list