HAST + ZFS + NFS + CARP

Chris Watson bsdunix44 at gmail.com
Fri Jul 1 16:55:02 UTC 2016


Hi Gary!

So I'll add another voice to the KISS camp. I'd rather have two boxes, each with two NICs attached to each other, doing zfs replication from A to B. Adding more redundant hardware just adds more points of failure. NICs have no moving parts, so as long as they are thermally controlled they won't fail. This is simple and as safe as you can get. As for how to handle an actual failover, I'd really like to try out the ctl-ha option. Maybe this weekend.
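
For concreteness, a minimal sketch of that A-to-B replication with zfs send/receive over ssh. The dataset name "tank/data" and the peer hostname "boxb" are placeholders, and a real setup would schedule this (and prune old snapshots) from cron or a small script:

    # initial full copy of the dataset from box A to box B
    zfs snapshot tank/data@repl-1
    zfs send tank/data@repl-1 | ssh boxb zfs receive -u tank/data

    # afterwards, ship only the changes since the previous snapshot
    zfs snapshot tank/data@repl-2
    zfs send -i tank/data@repl-1 tank/data@repl-2 | ssh boxb zfs receive -uF tank/data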

Sent from my iPhone 5

> On Jul 1, 2016, at 10:46 AM, Gary Palmer <gpalmer at freebsd.org> wrote:
> 
>> On Fri, Jul 01, 2016 at 05:11:47PM +0200, Julien Cigar wrote:
>>> On Fri, Jul 01, 2016 at 04:44:24PM +0200, InterNetX - Juergen Gotteswinter wrote:
>>> Don't get me wrong; what I'm trying to say is that, IMHO, you are trying
>>> to reach something that looks great until something goes wrong.
>> 
>> I agree..! :)
>> 
>>> 
>>> Keep it simple, stupid simple, without many moving parts, and avoid
>>> automagic voodoo wherever possible.
>> 
>> To be honest I've always been reluctant about "automatic failover", as I
>> think the problem is not so much "how" to do it but "when".. and as Rick
>> said, "The simpler/reliable way would be done manually by a sysadmin"..
> 
> I agree.  A sysadmin can verify that the situation needs a failover much
> better than any script.  In a previous job I heard of a setup where the
> cluster manager software on the standby node decided that the active node
> was down, so it did a forced takeover of the disks.  Since the active node
> was still up, it somehow managed to wipe out the partition tables on the
> disks, along with the VxVM (Veritas Volume Manager) configuration inside
> the partitions.
> 
> They were restoring the partition tables and VxVM config from backups.
> From what I remember, the backups were printouts, which made it slow going
> as they had to be re-entered by hand.  The system probably had dozens
> of disks (I don't know, but I know what role it was serving, so I can
> guess).
> 
> I'd rather not see that happen ever again.
> 
> (this was 15+ years ago FWIW, but the lesson is still applicable today)
> 
> Gary
> 
>> 
>>>> On 01.07.2016 at 16:41, InterNetX - Juergen Gotteswinter wrote:
>>>>> On 01.07.2016 at 16:39, Julien Cigar wrote:
>>>>>> On Fri, Jul 01, 2016 at 03:44:36PM +0200, InterNetX - Juergen Gotteswinter wrote:
>>>>>> 
>>>>>> 
>>>>>>> On 01.07.2016 at 15:18, Joe Love wrote:
>>>>>>> 
>>>>>>>> On Jul 1, 2016, at 6:09 AM, InterNetX - Juergen Gotteswinter <jg at internetx.com> wrote:
>>>>>>>> 
>>>>>>>> On 01.07.2016 at 12:57, Julien Cigar wrote:
>>>>>>>>> On Fri, Jul 01, 2016 at 12:18:39PM +0200, InterNetX - Juergen Gotteswinter wrote:
>>>>>>>>> 
>>>>>>>>> of course I'll test everything properly :) I don't have the hardware yet,
>>>>>>>>> so ATM I'm just looking at all the possible "candidates", and I'm
>>>>>>>>> aware that redundant storage is not that easy to implement ...
>>>>>>>>> 
>>>>>>>>> but what solutions do we have? It's either CARP + ZFS + (HAST|iSCSI),
>>>>>>>>> or zfs send|ssh zfs receive as you suggest (but it's
>>>>>>>>> not realtime), or a distributed FS (which I avoid like the plague..)
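
For the CARP leg of that first option, a minimal sketch of how two heads could share one service IP. The interface name, addresses and password are placeholders; each head keeps its own address, clients talk to the shared alias, and CARP moves that alias to the surviving head on failure:

    # /boot/loader.conf (both heads)
    carp_load="YES"

    # /etc/rc.conf on the master head
    ifconfig_em0="inet 192.168.1.2/24"
    ifconfig_em0_alias0="inet vhid 1 pass s3cret alias 192.168.1.10/32"

    # /etc/rc.conf on the backup head (higher advskew = less preferred)
    ifconfig_em0="inet 192.168.1.3/24"
    ifconfig_em0_alias0="inet vhid 1 advskew 100 pass s3cret alias 192.168.1.10/32"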
>>>>>>>> 
>>>>>>>> zfs send/receive can be nearly realtime.
>>>>>>>> 
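
As a rough illustration of the "nearly realtime" point, incremental sends can simply be looped at a short interval. A sketch only: the dataset, peer host and interval are placeholders, the base snapshot repl-0 is assumed to have been sent in full once already, and a real script would add locking and error handling:

    #!/bin/sh
    # crude near-realtime replication: snapshot + incremental send every 60 s
    DATASET=tank/data
    PEER=boxb
    PREV=repl-0                  # already present on both sides
    N=1
    while :; do
        CUR="repl-$N"
        zfs snapshot "${DATASET}@${CUR}"
        zfs send -i "@${PREV}" "${DATASET}@${CUR}" | \
            ssh "$PEER" zfs receive -uF "$DATASET"
        zfs destroy "${DATASET}@${PREV}"
        PREV="$CUR"
        N=$((N + 1))
        sleep 60
    done
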
>>>>>>>> External JBODs with cross-cabled SAS plus a commercial cluster solution
>>>>>>>> like RSF-1. Anything else is a fragile construction which begs for disaster.
>>>>>>> 
>>>>>>> This sounds similar to the CTL-HA code that went in last year, for which I haven't seen any sort of how-to.  The RSF-1 stuff sounds like it has more scaling options, though.  Which it probably should, given its commercial operation.
>>>>>> 
>>>>>> RSF-1 is what Pacemaker / Heartbeat tries to be. Judge me for linking
>>>>>> whitepapers, but in this case it's not such evil marketing blah:
>>>>>> 
>>>>>> http://www.high-availability.com/wp-content/uploads/2013/01/RSF-1-HA-PLUGIN-ZFS-STORAGE-CLUSTER.pdf
>>>>>> 
>>>>>> 
>>>>>> @ Julien
>>>>>> 
>>>>>> It seems like you take availability really seriously, so I guess you also
>>>>>> have plans for how to handle network problems like dead switches, flaky
>>>>>> cables, and so on.
>>>>>> 
>>>>>> Like using multiple network cards in the boxes, cross cabling between
>>>>>> the hosts (RS-232 and Ethernet, of course), and using proven, reliable
>>>>>> network switches in a stacked configuration (for example, stacked Cisco
>>>>>> 3750s). Not to forget redundant power feeds to redundant power supplies.
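
For the "multiple network cards" part, a minimal rc.conf sketch of a failover lagg(4) pair; the interface names and address are placeholders:

    # /etc/rc.conf: two NICs combined into one failover interface
    cloned_interfaces="lagg0"
    ifconfig_igb0="up"
    ifconfig_igb1="up"
    ifconfig_lagg0="laggproto failover laggport igb0 laggport igb1 inet 192.168.1.2/24"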
>>>>> 
>>>>> the only thing that is not redundant (yet?) is our switch, an HP
>>>>> ProCurve 2530-24G .. it's the next step :)
>>>> 
>>>> Arubas, okay. A quick look at the spec sheet does not seem to list a
>>>> stacking option.
>>>> 
>>>> What about power?
>>>> 
>>>>> 
>>>>>> 
>>>>>> If not, I would start again from scratch.
>>>>>> 
>>>>>>> 
>>>>>>> -Joe
>>>>>>> 
>> 
>> -- 
>> Julien Cigar
>> Belgian Biodiversity Platform (http://www.biodiversity.be)
>> PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
>> No trees were killed in the creation of this message.
>> However, many electrons were terribly inconvenienced.
> 
> 

