Bakul Shah bakul at BitBlocks.com
Sat Jul 2 00:38:27 GMT 2005

> > A couple FS specific suggestions:
> > - perhaps clustering can be built on top of existing
> >   filesystems.  Each machine's local filesystem is considered
> >   a cache and you use some sort of cache coherency protocol.
> >   That way you don't have to deal with filesystem allocation
> >   and layout issues.
>
> I see - that's an interesting idea.  Almost like each machine could
> mount the shared version read-only, then slap a layer on top that is
> connected to a cache coherency manager (maybe there is a daemon on each
> node, and the nodes sync their caches via the network) to keep the
> filesystems 'in sync'.  Then maybe only one elected node actually writes
> the data to the disk.  If that node dies, then another node is elected.

\begin{handwaving}
What I was thinking of:
- The cluster system ensures that there are at least N copies
  of every file at N+ separate locations.
- More than N copies may be cached depending on usage patterns.
- Any node can write.  The system takes care of replication
  and placement.
- Metadata and directories are implemented *above* this level.
- More likely you'd want to map file *fragments* to local
  files, so that a file can grow beyond one disk and smaller
  fragments mean you don't have to cache an entire file.
- You still need to mediate access at the file level, but this
  is no different from two or more processes accessing a local file.
Of course, the devil is in the details!
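The fragment-replication idea above could be sketched like this (a toy model: node names, fragment size, and the hash-based placement are all illustrative assumptions, not part of any real clustering system):

```python
# Toy sketch: place N replicas of each file fragment on N distinct nodes.
# Node names and the hash-based placement are illustrative assumptions.

import hashlib

NODES = ["node0", "node1", "node2", "node3", "node4"]
N_COPIES = 3               # the cluster guarantees at least this many copies
FRAGMENT_SIZE = 64 * 1024  # small fragments: no need to cache a whole file

def placement(path: str, frag_index: int, nodes=NODES, n=N_COPIES):
    """Return the n distinct nodes that hold copies of one fragment."""
    key = f"{path}#{frag_index}".encode()
    start = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(n)]

def fragments(path: str, size: int):
    """Map a file to its fragment indices, so a file can span disks."""
    return list(range((size + FRAGMENT_SIZE - 1) // FRAGMENT_SIZE))

# A 200 KB file needs 4 fragments; each lives on 3 separate nodes.
for i in fragments("/shared/data.bin", 200 * 1024):
    print(i, placement("/shared/data.bin", i))
```

Because placement is a pure function of (path, fragment), any node can recompute where copies live without asking a central directory; real systems also rebalance when nodes come and go, which this sketch ignores.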

> > - a network-wide 'stable storage disk' may be easier to do
> >   given GEOM.  There are at least N copies of each data block.
> >   Data may be cached locally at any site but writing data is
> >   done as a distributed transaction.  So again cache
> >   coherency is needed.  A network RAID if you will!
>
> I'm not sure how this would work.  A network RAID with geom+ggate is
> simple (I've done this a couple times - cool!), but how does that get me

What I had in mind was something like this: Each logical block is
backed by N physical blocks at N sites.  Individual
filesystems live in partitions of this space.  So in effect
you have a single NFS server per filesystem that deals with
[...] be faster.  When a server goes down, another server can be
elected.
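The "writing data is done as a distributed transaction" part could be modeled as a two-phase write over the N replica sites.  Below is a toy sketch where sites are in-memory objects standing in for remote disks; this is an illustration of the idea, not GEOM/ggate code:

```python
# Toy model of writing one logical block as a distributed transaction:
# the write commits only if all N replica sites accept it (two-phase).
# Site objects are in-memory stand-ins for remote disks.

class Site:
    def __init__(self, name):
        self.name = name
        self.blocks = {}      # committed logical block -> data
        self.pending = {}     # prepared-but-uncommitted writes
        self.up = True

    def prepare(self, lbn, data):
        if not self.up:
            return False
        self.pending[lbn] = data
        return True

    def commit(self, lbn):
        self.blocks[lbn] = self.pending.pop(lbn)

    def abort(self, lbn):
        self.pending.pop(lbn, None)

def replicated_write(sites, lbn, data):
    """Phase 1: every site prepares; phase 2: commit, else abort everywhere."""
    if all(s.prepare(lbn, data) for s in sites):
        for s in sites:
            s.commit(lbn)
        return True
    for s in sites:
        s.abort(lbn)
    return False

sites = [Site(f"site{i}") for i in range(3)]
assert replicated_write(sites, 7, b"hello")      # all three sites commit
sites[1].up = False
assert not replicated_write(sites, 8, b"world")  # one site down: aborted
```

A real design would likely use a quorum (commit when a majority accept) rather than requiring all N sites, so that one dead site doesn't block writes; the all-or-nothing version above is just the simplest transaction to state.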

> :) I understand.  Any nudging in the right direction here would be
> appreciated.

I'd start by modeling how a file maps to a sequence of disk
blocks (*without* writing any code or worrying about details
of formats, but capturing the essential elements).  I'd
describe the various operations in terms of preconditions and
postconditions.  Then I'd extend the model to deal with
redundancy and so on.  Then I'd model various failure modes,
etc.  If you are interested _enough_ we can take this offline
and try to work something out.  You may even be able to use
perl to create an `executable' specification :-)
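An executable specification in that style might look like this (Python rather than perl here, and the model is deliberately naive: a filesystem is just a map from names to block sequences, with pre/postconditions written as assertions):

```python
# Executable-specification sketch: a filesystem modeled as a mapping from
# file names to sequences of disk block numbers.  Each operation states
# its preconditions and postconditions as assertions.  Purely illustrative.

class FSModel:
    def __init__(self, nblocks):
        self.free = set(range(nblocks))   # unallocated disk blocks
        self.files = {}                   # name -> list of block numbers

    def create(self, name):
        assert name not in self.files            # pre: file must not exist
        self.files[name] = []
        assert self.files[name] == []            # post: empty block list

    def append_block(self, name):
        assert name in self.files and self.free  # pre: exists, space left
        old_len = len(self.files[name])
        blk = self.free.pop()
        self.files[name].append(blk)
        assert len(self.files[name]) == old_len + 1  # post: one block longer
        assert blk not in self.free                  # post: block allocated

    def remove(self, name):
        assert name in self.files                # pre: file exists
        self.free |= set(self.files.pop(name))
        assert name not in self.files            # post: gone, blocks reclaimed

fs = FSModel(nblocks=8)
fs.create("a")
fs.append_block("a")
fs.append_block("a")
fs.remove("a")
assert len(fs.free) == 8   # invariant: no blocks leaked
```

From here one could extend the model with N-way replication and injected failures, and check that the invariants (block counts, replica counts) still hold after every operation.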
\end{handwaving}