remote operation or admin

Chuck Robey chuckr at chuckr.org
Wed Mar 19 17:07:10 UTC 2008


Jeremy Chadwick wrote:
> On Mon, Mar 17, 2008 at 08:43:49PM -0400, Chuck Robey wrote:
>> I have 4 computers, 1 big FreeBSD-current (4 x86 procs), 2 GentooLinux (1
>> is a dual AMD Opteron, the other a dual older x86), and 1 MacOSX (dual
>> PPC).  I was thinking about looking for two items, I'm not sure if I want
>> one or both of them: either some software to let me merely remotely manage
>> them (public software, mind) or, even better, something to get these
>> disparate hardwares to be able to work together, and (as much as possible)
>> to be able to share work.
>>
>> What might be the best, in terms of ability, and especially the ability to
>> make these work together?  If they're not a FreeBSD port, as long as
>> they're reasonably stable, I don't mind porting things, but it needs to be
>> stable on all those CPUs.  Could you recommend me something?  I'll go
>> chase each one down, I won't jump on you if you're wrong, gimme your
>> guesses, ok?
> 
> I don't understand your question.  It's almost like you're asking two
> questions:
> 
> 1) How can I manage all of these machines remotely?  (E.g. ssh, VNC,
> serial console, KVM, etc.)
> 
> 2) How can I harness the power of all of these machines simultaneously?
> (e.g. some form of CPU clustering)
> 
> Can you elaborate?
> 

Sure.  I suppose what was really driving me was a little daydreaming that
I'd been having, about the direction of computing, in the coming few years.
 Not 20 years ahead, but more likely 2 or 4 years ahead.  I was thinking
that the present wave towards constructing single chips with multiple cores
on them could only grow in scale, so the real question would be, just how
large could "n" get?  Then, IF that were true, it raises another question ...
right now, we rely upon our fine smp code to do a fairly good job of both
managing the administration of those cores, AND the dispatching of jobs
amongst them.

By now, you've probably smelled the rat: I didn't allow for the differences
in the hardware that I'd painted: 2 x86 cores, 6 x86_64 cores, and 2 PPC
cores.  Well, I had to throw all that in if I were to reach for the sky; I
was asking for everything I could possibly get, but I'm not, honestly,
unwilling to consider something a bit less than that, if it were easier to
manage.

What is most important in my considerations is this: how might it be
possible to stretch our present SMP software so that its management domains
extend to cover multiple computers?  Some sort of a bridge is needed here,
because there is no software today (that I'm aware of, and that sure leaves
a huge set of holes) that lets you manage the cores of separate computers
as one system, so that maybe today I might be able to have an 8 or 10 core
system, and maybe tomorrow look at the economic and software possibility of
having a 256 core system.  I figure there would need to be some tight reins
on latency, and you would want some BIGTIME comm links; I dunno, maybe even
Gigabit ethernet wouldn't be enough, maybe you'd need some sort of SCSI bus
linkage, something on that scale?  Or is fiber getting to that range yet?

Anyhow, is it even remotely possible for us to stretch our present SMP
software (even with its word-size limitation capping the range at 32
processors) to be able to jump across machines?  That would be one hell of
a huge thing to consider, now wouldn't it?
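
To make that 32-processor ceiling concrete, here's a toy sketch (not the
actual kernel code, just the shape of the problem): if the set of CPUs is
tracked as one bit per CPU in a single 32-bit word, the mask simply runs
out of bits at 32.

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative only -- not the real FreeBSD cpumask code, just the idea
 * that a one-bit-per-CPU mask held in one 32-bit word caps out at 32 CPUs.
 */
typedef uint32_t toy_cpumask_t;

static void
mark_cpu_online(toy_cpumask_t *mask, int cpu)
{
	if (cpu < 0 || cpu >= 32) {
		fprintf(stderr, "cpu %d does not fit in a 32-bit mask\n", cpu);
		return;
	}
	*mask |= (toy_cpumask_t)1 << cpu;
}

int
main(void)
{
	toy_cpumask_t online = 0;

	mark_cpu_online(&online, 0);	/* first CPU */
	mark_cpu_online(&online, 31);	/* last CPU the mask can hold */
	mark_cpu_online(&online, 32);	/* rejected: off the end of the word */

	printf("online mask: 0x%08x\n", (unsigned)online);
	return (0);
}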