Questions about geom-gate and RAID1/10 and CARP

Eric Anderson anderson at centtech.com
Thu Dec 15 07:33:57 PST 2005


Michael Schuh wrote:

>Hello Eric,
>Hello list,
>
>Oh, my first, or rather primary, idea was to have two mostly
>identical machines using ggated and gvinum to build a redundant
>system that looks like one machine from the public side. But I have
>since realized that this cannot work with UFS or UFS2, since it is
>not safe to have one filesystem mounted read-write on both machines;
>so I think I must have one writer and two listeners. But you are
>right, it is also not safe to mount a filesystem read-only while
>another machine is writing to it.
>
>So the only workable concept is to have one writer and one (or two)
>listeners that read over NFS.
>But that throws my wish completely out the window.
>  
>



>The only FS I know of that has this functionality is the very young
>ZFS from Solaris. Or is there another solution for my wish?
>  
>

There are others, like Lustre, GFS, PolyServe, etc.; however, none of 
them work on FreeBSD at this point.  A few people (including myself) 
have started a project to port GFS to FreeBSD (the gfs4fbsd project on 
SourceForge). 


>Let me paint a picture of what my target is:
>two machines, A and B, both with mostly identical hardware and OS
>(RELENG_6). Both machines have one private interface, cross-connected
>to the other machine, for updates to the FS and monitoring of the peer.
>The public interfaces of both machines should be bound to one public
>address with CARP and pfsync, so that clients see only one machine.
>And last, my wish is that RAID1/10 syncs the two machines
>automagically... I hope you understand what my target is.
>
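The CARP/pfsync half of the wish above is straightforward to express in
rc.conf on a RELENG_6-era system. A minimal sketch for machine A follows;
the interface names, addresses, vhid, and password are made-up examples,
not taken from the original mail:

```shell
# Hypothetical /etc/rc.conf fragment for machine A (all values assumed).
ifconfig_em0="inet 192.0.2.11 netmask 255.255.255.0"   # public interface
ifconfig_em1="inet 10.0.0.1 netmask 255.255.255.0"     # private cross-over link
cloned_interfaces="carp0"
ifconfig_carp0="vhid 1 pass examplepass 192.0.2.10/24" # shared public address
pfsync_enable="YES"
pfsync_syncdev="em1"                                   # sync states over the private link
```

Machine B would carry the same carp0 line with an added `advskew` (e.g.
`advskew 100`) so that it stays backup while A is alive; clients only ever
talk to 192.0.2.10.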

I'm actually wondering whether you couldn't do this with NFS and some 
pieces hacked together.  I haven't thought it through, but it seems 
like you could make the active writer an NFS server that also mounts 
its own NFS share read-write, with the export served on a virtual 
interface, or at least on the one that 'moves' with your failover.  The 
other machines would mount that NFS server's export read-only; when it 
fails over, the machine taking over would run a script to begin serving 
that export read-write to all, and its own client would continue its 
connection, now on its new virtual interface.  You'd also have to set 
up the ggate pieces so that it was mirroring the original 'master' 
disk; when the failover occurred, you would quickly mount your local 
mirrored disk read-write, ignoring the 'unclean' message, begin a 
background fsck, then start the NFS server on that mount point.  You 
would probably also have to fail the original drive in the mirror, to 
effectively 'fence' that node from making disk changes at the same time 
the new master did. 
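The promotion steps described above could be sketched as a script on the
standby node.  This is only an illustration of the sequence, not a tested
implementation; the mirror name, ggate unit, and mount point are invented
for the example, and the real fencing details would need care:

```shell
#!/bin/sh
# Hypothetical failover sketch for the standby node.  Assumed names:
# local gmirror "gm0" that tracked the old master through ggate0,
# exported filesystem mounted on /export.
MIRROR=gm0
MOUNTPOINT=/export

promote_to_master() {
    # Stop pulling blocks from the failed master and drop its ggate
    # provider out of the mirror, fencing it from rejoining with
    # stale data.
    ggatec destroy -f -u 0 2>/dev/null || true
    gmirror remove "$MIRROR" ggate0 2>/dev/null || true

    # Force a read-write mount of the local copy despite the
    # 'unclean' flag, then clean it up with a background fsck.
    mount -f -o rw "/dev/mirror/$MIRROR" "$MOUNTPOINT"
    fsck -B "/dev/mirror/$MIRROR" &

    # Start serving the export read-write on the address that moved
    # here with the CARP failover.
    /etc/rc.d/mountd onestart
    /etc/rc.d/nfsd onestart
}

# Only attempt a real promotion where the geom_gate tools exist;
# anywhere else this file is just a sketch.
if command -v ggatec >/dev/null 2>&1; then
    promote_to_master
fi
```

In practice this would be triggered from the CARP state transition (e.g.
a devd hook on the carp interface going MASTER), and the old master would
need a matching demotion path before it could safely rejoin the mirror.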

Eric




-- 
------------------------------------------------------------------------
Eric Anderson        Sr. Systems Administrator        Centaur Technology
Anything that works is better than anything that doesn't.
------------------------------------------------------------------------



More information about the freebsd-geom mailing list