HAST + ZFS + NFS + CARP

Julien Cigar julien at perdition.city
Fri Jul 1 10:57:41 UTC 2016


On Fri, Jul 01, 2016 at 12:18:39PM +0200, InterNetX - Juergen Gotteswinter wrote:
> Am 01.07.2016 um 12:15 schrieb Julien Cigar:
> > On Fri, Jul 01, 2016 at 11:42:13AM +0200, InterNetX - Juergen Gotteswinter wrote:
> >>>
> >>> Thank you very much for that "advice", it is much appreciated!
> >>>
> >>> I'll definitely go with iSCSI (with which I don't have much
> >>> experience) over HAST.
> >>
> >> good luck, I'd rather cut off one of my fingers than use something like
> >> this in production. But it's probably a quick way to go if your goal is
> >> to find a new job opportunity ;)
> > 
> > why...? I guess iSCSI is slower but should be safer than HAST, no?
> 
> do your testing, please, even with simulated short network cuts. 10-20
> secs are more than enough to give you a picture of what is going to happen

of course I'll test everything properly :) I don't have the hardware yet,
so at the moment I'm just looking at all the possible "candidates", and I'm
aware that redundant storage is not that easy to implement ...
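
Something like this is what I have in mind for the cut test (a rough
sketch; igb1 as a dedicated link to the SLAVE is an assumption on my part):

#!/bin/sh
# simulate a short network cut towards the SLAVE while the pool is busy
ifconfig igb1 down        # drop the link carrying the iSCSI traffic
sleep 15                  # 10-20 secs, as you suggest
ifconfig igb1 up          # bring it back
zpool status storage      # check how the pool reacted (DEGRADED? resilvering?)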

but what solutions do we have? It's either CARP + ZFS + (HAST|iSCSI),
or zfs send | ssh zfs receive as you suggest (but that's not realtime),
or a distributed FS (which I avoid like the plague ...)
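
For reference, the zfs send route would look more or less like the
following (just a sketch: the "slave" hostname is made up and the pool is
assumed to be called "storage" on both nodes):

$> zfs snapshot -r storage@repl1
$> zfs send -R storage@repl1 | ssh slave zfs receive -Fdu storage
# ... and from then on incrementally, e.g. from cron:
$> zfs snapshot -r storage@repl2
$> zfs send -R -i storage@repl1 storage@repl2 | ssh slave zfs receive -Fdu storage

so at best you lose one replication interval of data, hence "not realtime".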

> 
> >>
> >>>
> >>> Maybe a stupid question, but assuming that on the MASTER ada{0,1} are
> >>> the local disks and da{0,1} are the iSCSI disks exported from the
> >>> SLAVE, would you go with:
> >>>
> >>> $> zpool create storage mirror /dev/ada0s1 /dev/ada1s1 mirror /dev/da0
> >>> /dev/da1
> >>>
> >>> or rather:
> >>>
> >>> $> zpool create storage mirror /dev/ada0s1 /dev/da0 mirror /dev/ada1s1
> >>> /dev/da1
> >>>
> >>> I guess the former is better, but I'm just asking to be sure .. (or
> >>> maybe it's better to export a ZVOL from the SLAVE over iSCSI?)
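
If the ZVOL route is the way to go, I imagine the export side on the SLAVE
would look roughly like this (just a sketch: the pool/ZVOL names, target
name, size and address are all made up):

# on the SLAVE: create a ZVOL and export it through ctld(8)
$> zfs create -V 500G slavepool/export0

# /etc/ctl.conf
portal-group pg0 {
        discovery-auth-group no-authentication
        listen 192.168.0.2
}
target iqn.2016-07.city.perdition:export0 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/zvol/slavepool/export0
        }
}

$> service ctld start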
> >>>
> >>
> >> are you really sure you understand what you're trying to do? Even if
> >> you do right now, I bet that in a disaster case you will be lost.
> >>
> >>
> > 
> > well this is pretty new to me, but I don't see what could be wrong with:
> > 
> > $> zpool create storage mirror /dev/ada0s1 /dev/da0 mirror /dev/ada1s1
> > /dev/da1
> > 
> > Let's take some use-cases:
> > - MASTER and SLAVE are alive: the data is "replicated" on both
> >   nodes. As iSCSI is used, ZFS sees all the details of the
> >   underlying disks, so we can be sure that no corruption will occur
> >   (contrary to HAST)
> > - SLAVE dies: correct me if I'm wrong, but the pool is still available;
> >   fix the SLAVE, resilver, and that's it ..?
> > - MASTER dies: CARP will notice it, the SLAVE will take over the VIP,
> >   and the failover script will be executed with a $> zpool import -f
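
For what it's worth, the failover script from the last point could be wired
through devd(8), something like this sketch (the script name is made up):

# /etc/devd.conf: run the hook when this node becomes CARP MASTER on igb0
notify 30 {
        match "system" "CARP";
        match "subsystem" "[0-9]+@igb0";
        match "type" "MASTER";
        action "/usr/local/sbin/become-master.sh";
};

and the script itself (/usr/local/sbin/become-master.sh):

#!/bin/sh
# promote this node: grab the pool and re-export the NFS shares
zpool import -f storage
service mountd reload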
> > 
> >>> Correct me if I'm wrong, but from a safety point of view this setup is
> >>> also the safest, as you get the equivalent of HAST's "fullsync" mode
> >>> (but it's also the slowest), so I can be 99.99% confident that the
> >>> pool on the SLAVE will never be corrupted, even in the case where the
> >>> MASTER suddenly dies (power outage, etc), and that a zpool import -f
> >>> storage will always work?
> >>
> >> 99.99%? Optimistic, very optimistic.
> > 
> > the only situation where corruption could occur is some sort of network
> > corruption (a bug in the driver, a broken network card, etc), or a bug
> > in ZFS ... but you'd have the same risk with zfs send | ssh zfs receive
> > 
> >>
> 
> optimistic
> 
> >> we are playing with the recovery of a test pool which was imported on
> >> two nodes at the same time. It looks pretty messy
> >>
> >>>
> >>> One last thing: this "storage" pool will be exported through NFS to
> >>> the clients, and when a failover occurs they should, in theory, not
> >>> notice it. I know it's pretty hypothetical, but I wondered if pfsync
> >>> could play a role in this area (active connections)..?
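
What I had in mind was merely syncing the pf(4) state table between the two
nodes, i.e. something like this in /etc/rc.conf (igb2 as a dedicated sync
interface is an assumption on my part):

pfsync_enable="YES"
pfsync_syncdev="igb2"

though pfsync only carries firewall states, not the NFS/TCP sessions
themselves, so it wouldn't keep the clients' connections alive by itself.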
> >>>
> >>
> >> they will notice, and they will get stuck or worse (reboot)
> > 
> > this is something that should be properly tested, I agree ..
> > 
> 
> do your testing, and keep your clients under load while testing. Do
> writes onto the NFS mounts and then cut the network. You will be
> surprised by the impact.
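
Noted, I suppose something like this on a client while triggering the
failover (a crude sketch, the mountpoint is made up):

# on an NFS client: sustained writes against the mount during the cut
$> while true; do dd if=/dev/zero of=/mnt/storage/loadtest bs=1m count=100; done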
> 
> >>> Thanks!
> >>> Julien
> >>>
> >>>>
> >>>>>>>> ZFS would then know as soon as a disk is failing.
> >>>>>>>> And if the master fails, you only have to import (-f certainly, in case of a master power failure) on the slave.
> >>>>>>>>
> >>>>>>>> Ben

-- 
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.