remote operation or admin

Julian Elischer julian at
Wed Mar 19 20:03:59 UTC 2008

Chuck Robey wrote:
> Jeremy Chadwick wrote:
>> On Wed, Mar 19, 2008 at 02:03:54PM -0400, Chuck Robey wrote:
>>> Well, I am, and I'm not, if you could answer me one question, then I would
>>> probably know for sure.  What is the difference between our SMP and the
>>> general idea of clustering, as typified by Beowulf?  I was under the
>>> impression I was talking about seeing the possibility of moving the two
>>> closer together, but maybe I'm confused in the meanings?
>> SMP as an implementation is mainly intended for single systems with
>> multiple processors (e.g. multiple physical CPUs, or multiple cores;
>> same thing).  It distributes kernel operations (kernel threads) across
>> those processors, rather than only utilising a single processor.
>> Clustering allows for the distribution of a task (a compile using gcc,
>> running of certain disk I/O tasks, running multiple userland (or I
>> suppose kernel, if the kernel had clustering support) threads) across
>> multiple physical computers on a local network.
>> The best example I have for real-world clustering is rendering (mostly
>> 3D, but you can "render" anything; I'm referring to 3D in this case).
>> A person doing modelling creates a model scene using 3D objects, applies
>> textures to it, lighting, raytracing aspects, vertex/bones animation,
>> and anything else -- all using their single workstation.  Then the
>> person wants to see what it all looks like -- either as a still frame
>> (JPEG/PNG/TIFF), or as a rendered animation (AVI/MPG/MJPEG).
>> Without any form of clustering, the workstation has to do all of the
>> processing/rendering work by its lonesome self.  This can take a very,
>> very long time -- modellers aren't going to wait 2 hours for their work
>> to render, only to find they messed up some bones vertexes half way into
>> the animation.
>> With clustering, the workstation has the capability to send the
>> rendering request out onto the network to a series of what're called
>> "slaves" (other computers set up to handle such requests).  The
>> workstation says "I want this rendered.  I want all of you to do it".
>> Let's say there's 200 machines in the cluster as slaves, and let's say
>> all 200 of those machines are dual-core (so 400 CPUs total).  You then
>> have 400 CPUs rendering your animation, versus just 2 on the
>> workstation.
>> The same concept can apply to compiling (gcc saying "I want this C file
>> compiled" or whatever), or any other desired "distributed computing"
>> computation.  It all depends on whether the software you want to run
>> supports clustering.
>> Different clustering software runs at different levels; some might act
>> as "virtual environments", so underlying software may not need to know
>> about clustering (e.g. it "just works"); others might require each
>> program to be fully cluster-aware.
>> Make sense?  :-)
> Not completely yet (I tend to be stubborn, if I carry this too far, tell me
> in private mail and I will politely drop it).  Your use cases show me the
> differences in size, and *because* of the size, the differences in how
> you'd use them, and that part I did already know.  I'm perfectly well aware
> of the current differences in size, but what I'm after is what are the real
> differences, ignoring size, in what they actually accomplish, and how they
> go about doing it.  I'm thinking that it might be possible to find some
> way to extend the work domain of an SMP
> system to stretch across machine lines, to jump across motherboards.  Maybe
> not to be global (huge latencies scare me away), but what about just going
> 3 feet, on a very high speed bus, like maybe a private pci bus?  Not what
> is, what could be?
> And, I have experienced just as many loonies as you have, who ask for some
> gigantic task, then sit back and want to take the credit and act like a
> cheerleader, figuring they really got the thing going.  Well, I DON'T
> really want the help, I have in mind a project for me and a friend, with
> small smallish machine resources, maybe a bunch of small ARM boards.  I
> wouldn't turn down help, but I'm not really proselytizing.  Something
> small, but with a bunch of bandwidth.  So, in that case, what really are
> the differences between SMP and clustering, besides the raw current size of
> the implementation?  Are there huge basic differences between the
> clustering concept and SMP's actual tasks?

The difference is in the "S" in SMP.

In SMP, all memory (and other resources) is equally available to all
processors, and threads and processes can migrate around without
special regard for location.  File descriptors and data are equally
available to all threads of a process, independently of which CPU they
are on, etc. etc.
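As a minimal sketch of that shared-memory property (Python here purely for illustration, not FreeBSD kernel code): every thread of a process can touch the same variable directly, regardless of which CPU the scheduler puts it on, with only ordinary in-process locking needed.

```python
# Sketch: under SMP, all threads of one process share the same memory
# directly; no copying or messaging layer sits between them.
import threading

counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:          # plain shared-memory synchronisation
            counter += 1

threads = [threading.Thread(target=work, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: every thread updated the one shared counter
```

The same holds for file descriptors: any thread can read or write a descriptor the process holds, without caring which CPU it is running on.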

In clustering, you basically have to have an extra layer of
indirection between the working processor and the resources.

Some resources are local; some are not.  Threads running in a process
can only do so when they have access to all the resources of the
process.  Processes MAY be able to migrate, but it is a complex
operation that involves process snapshotting and transportation
of the snapshot, etc.
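That extra layer of indirection can be sketched like this (again a toy Python illustration, with a socketpair standing in for the network): the "remote" worker cannot reach into the caller's memory, so the task has to be serialized, shipped over, and reconstructed on the other side -- a miniature version of the snapshot-and-transport problem.

```python
# Sketch: in a cluster, work must be serialized ("snapshotted") and
# transported to the worker; nothing is shared directly.
import pickle
import socket

def remote_worker(conn):
    # Pretend this runs on another machine: it sees only the bytes sent.
    task = pickle.loads(conn.recv(4096))   # reconstruct the shipped task
    result = sum(task["data"])             # do the work with local resources
    conn.send(pickle.dumps(result))        # ship the result back

a, b = socket.socketpair()                 # stands in for a network link
task = {"op": "sum", "data": [1, 2, 3, 4]}
a.send(pickle.dumps(task))                 # the indirection layer in action
remote_worker(b)
result = pickle.loads(a.recv(4096))
print(result)  # -> 10
```

Compare with the SMP case: there, a thread would just read the list out of shared memory. Here, every access to a non-local resource has to go through the serialize/transport/reconstruct cycle, which is exactly why migrating a whole process is so much harder than migrating a thread between CPUs.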

Matt in DragonFly is trying to make his system suitable for
clustering.  This is not the same as making it suitable for SMP,
though there are of course some common requirements.  He's doing SMP
as well, though.

> _______________________________________________
> freebsd-hackers at mailing list
> To unsubscribe, send any mail to "freebsd-hackers-unsubscribe at"

More information about the freebsd-hackers mailing list