remote operation or admin
mwm-keyword-freebsdhackers2.e313df at mired.org
Thu Mar 20 04:06:00 UTC 2008
On Wed, 19 Mar 2008 12:31:24 -0700
Julian Elischer <julian at elischer.org> wrote:
> Chuck Robey wrote:
> > -----BEGIN PGP SIGNED MESSAGE-----
> > Hash: SHA1
> > Jeremy Chadwick wrote:
> >> On Wed, Mar 19, 2008 at 02:03:54PM -0400, Chuck Robey wrote:
> >>> Well, I am, and I'm not; if you could answer me one question, then I would
> >>> probably know for sure. What is the difference between our SMP and the
> >>> general idea of clustering, as typified by Beowulf? I was under the
> >>> impression I was talking about seeing the possibility of moving the two
> >>> closer together, but maybe I'm confused in the meanings?
> >> SMP as an implementation is mainly intended for single systems with
> >> multiple processors (e.g. multiple physical CPUs, or multiple cores;
> >> same thing). It distributes kernel operations (kernel threads) across
> >> those processors, rather than only utilising a single processor.
> >> Clustering allows for the distribution of a task (a compile using gcc,
> >> running of certain disk I/O tasks, running multiple userland (or I
> >> suppose kernel, if the kernel had clustering support) threads) across
> >> multiple physical computers on a local network.
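[The distinction above can be sketched in code. This is a hypothetical illustration, not from the thread: SMP-style distribution keeps every task on one machine, where all workers share a single address space, while a cluster would spread the same map over separate machines via the network.]

```python
from concurrent.futures import ThreadPoolExecutor

shared = []  # one address space: every worker sees the same list

def work(n):
    shared.append(n)          # all threads touch the same memory directly
    return n * n

# SMP-style: spread tasks across local cores, nothing leaves the box
with ThreadPoolExecutor(max_workers=4) as pool:
    squares = list(pool.map(work, range(8)))

print(squares)      # [0, 1, 4, 9, 16, 25, 36, 49]
print(len(shared))  # 8 -- every task ran against the shared state
```

A cluster version of the same `map` would have to serialize each task, send it to a remote machine, and copy the result back; nothing is shared for free.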
> >> The best example I have for real-world clustering is rendering (mostly
> >> 3D, but you can "render" anything; I'm referring to 3D in this case).
> >> A person doing modelling creates a model scene using 3D objects, applies
> >> textures to it, lighting, raytracing aspects, vertex/bones animation,
> >> and anything else -- all using their single workstation. Then the
> >> person wants to see what it all looks like -- either as a still frame
> >> (JPEG/PNG/TIFF), or as a rendered animation (AVI/MPG/MJPEG).
> >> Without any form of clustering, the workstation has to do all of the
> >> processing/rendering work by its lonesome self. This can take a very,
> >> very long time -- modellers aren't going to wait 2 hours for their work
> >> to render, only to find they messed up some bones vertexes halfway into
> >> the animation.
> >> With clustering, the workstation has the capability to send the
> >> rendering request out onto the network to a series of what're called
> >> "slaves" (other computers set up to handle such requests). The
> >> workstation says "I want this rendered. I want all of you to do it".
> >> Let's say there's 200 machines in the cluster as slaves, and let's say
> >> all 200 of those machines are dual-core (so 400 CPUs total). You then
> >> have 400 CPUs rendering your animation, versus just 2 on the
> >> workstation.
> >> The same concept can apply to compiling (gcc saying "I want this C file
> >> compiled", or whatever), or to any other distributed computation you
> >> desire. It all depends on whether the software you want to cluster
> >> supports it.
> >> Different clustering software runs at different levels; some might act
> >> as "virtual environments", thus underlying software may not need to know
> >> about clustering (e.g. it "just works"); others might require each
> >> program to be fully cluster-aware.
> >> Make sense? :-)
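[The master/slave split Jeremy describes can be simulated in a few lines. This is a hypothetical sketch in one process; a real render farm would push each assignment over sockets or a queue to the slave machines. The numbers mirror his example: 200 dual-core slaves, 400 frames.]

```python
from collections import defaultdict

def distribute(frames, slaves):
    """Round-robin frame assignment -- the simplest scheduling policy."""
    assignment = defaultdict(list)
    for i, frame in enumerate(frames):
        assignment[slaves[i % len(slaves)]].append(frame)
    return assignment

slaves = [f"slave{n}" for n in range(200)]   # hypothetical host names
jobs = distribute(range(400), slaves)

print(len(jobs["slave0"]))   # 2 frames per dual-core slave
```

Real schedulers also handle slave failure and uneven frame costs, but the core idea is the same: the workstation only assigns work and collects results.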
> > Not completely yet (I tend to be stubborn, if I carry this too far, tell me
> > in private mail and I will politely drop it). Your use cases show me the
> > differences in size, and *because* of the size, the differences in how
> > you'd use them, and that part I did already know. I'm perfectly well aware
> > of the current differences in size, but what I'm after is what are the real
> > differences, ignoring size, in what they actually accomplish, and how they
> > go about doing it. I'm wondering whether it might be possible to find
> > some way to extend the work domain of an SMP system across machine
> > lines, to jump across motherboards. Maybe
> > not to be global (huge latencies scare me away), but what about just going
> > 3 feet, on a very high speed bus, like maybe a private pci bus? Not what
> > is, what could be?
> > And, I have experienced just as many loonies as you have, who ask for some
> > gigantic task, then sit back and want to take the credit and act like a
> > cheerleader, figuring they really got the thing going. Well, I DON'T
> > really want the help, I have in mind a project for me and a friend, with
> > smallish machine resources, maybe a bunch of small ARM boards. I
> > wouldn't turn down help, but I'm not really proselytizing. Something
> > small, but with a bunch of bandwidth. So, in that case, what really are
> > the differences between smp and clustering, besides the raw current size of
> > the implementation? Are there huge basic differences between the
> > clustering concept and SMP's actual tasks?
> The difference is in the "S" in SMP.
I think you're grabbing the wrong difference. Losing the symmetry in
an SMP system leaves you with an asymmetric MP system, not a
cluster. You can do asymmetric MP on a single box. Most early Unix MP
implementations were such, as the easiest thing to implement.
> In SMP all memory (and other resources) is equally available to all
> processors and threads and processes can migrate around without
> special regard for location. File descriptors and data are equally
> available to all threads of a process independently of which CPU they
> are on, etc.
This is true in some non-symmetric MP systems as well. What's different
is that the kernel divides the work up between the processors
asymmetrically; typically, interrupts and kernel code run on only one
processor (hence it's not symmetric).
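[A toy model of that asymmetric placement policy -- purely illustrative, not any real kernel's scheduler: kernel-mode work is pinned to CPU 0, while only user threads spread across all CPUs.]

```python
from itertools import cycle

class AsymmetricScheduler:
    """Toy asymmetric-MP model: interrupts and syscalls run on CPU 0
    only; user threads rotate across every CPU."""

    def __init__(self, ncpus):
        self.user_cpus = cycle(range(ncpus))

    def place(self, task):
        if task in ("interrupt", "syscall"):
            return 0                  # the asymmetry: one CPU runs the kernel
        return next(self.user_cpus)   # user work migrates freely

s = AsymmetricScheduler(4)
placements = [s.place(t) for t in ("interrupt", "user", "user", "syscall", "user")]
print(placements)   # [0, 0, 1, 0, 2]
```

On a symmetric system the `if` branch disappears: any CPU can take an interrupt or run kernel code, which is exactly what the extra locking in an SMP kernel buys.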
> Some resources are local. Some are not. Threads running in a process
> can only do so when they have access to all the resources of the
> process. Processes MAY be able to migrate, but it is a complex
> operation that involves process snapshotting and transportation
> of the snapshot, etc.
Some of them, anyway. Even worse, operations on those with the extra
layer can't be locked locally, but have to be locked on the rest of
the systems as well. This can lead to some really strange results
(e.g., it's faster to ship thousands of multi-megabyte files per hour
across a network link than to rename them between directories on a SAN,
because the SAN has to lock the two directories on every attached
system to do a single rename).
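[A back-of-the-envelope cost model for that SAN rename effect -- the latency figures are made-up assumptions, chosen only to show how the lock cost scales with node count: a cluster-wide rename must acquire both directory locks on every attached node, one round trip each.]

```python
def rename_cost_ms(nodes, round_trip_ms=1.0, local_ms=0.01):
    """Estimated rename latency: two directory locks (source and
    destination) must be taken on every node, plus the local work."""
    return 2 * nodes * round_trip_ms + local_ms

print(rename_cost_ms(1))    # ~2 ms with one machine attached
print(rename_cost_ms(50))   # ~100 ms once 50 nodes must agree
```

The local work is constant; only the lock traffic grows with the cluster, which is why an operation that is nearly free on one box can dominate on a shared filesystem.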
Mike Meyer <mwm at mired.org> http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.