HAST + ZFS + NFS + CARP

Ben RUBSON ben.rubson at gmail.com
Fri Jul 1 15:02:30 UTC 2016


I think what we're missing is something like this:
http://milek.blogspot.fr/2007/03/zfs-online-replication.html
http://www.compnect.net/?p=16461

Online replication built into ZFS would be awesome.
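
Lacking that, frequent incremental snapshots pushed with zfs send/receive
(as Juergen notes below) already get reasonably close to "online". A minimal
sketch, assuming a dataset tank/data, a standby host reachable as "standby",
and that an initial full send has already been done -- the names and the
10-second interval are placeholders, not a tested setup:

    #!/bin/sh
    # Repeatedly snapshot the dataset and ship the increment to the standby.
    DATASET=tank/data
    REMOTE=standby
    # newest existing snapshot of the dataset (the common base on both sides)
    PREV=$(zfs list -H -t snapshot -o name -s creation -d 1 "$DATASET" | tail -1)
    while true; do
        NOW="$DATASET@repl-$(date +%Y%m%d%H%M%S)"
        zfs snapshot "$NOW"
        # -i sends only the delta since $PREV; -F on the receiving side
        # rolls the standby back in case it has diverged.
        if zfs send -i "$PREV" "$NOW" | ssh "$REMOTE" zfs receive -F "$DATASET"; then
            zfs destroy "$PREV"
            PREV="$NOW"
        fi
        sleep 10
    done

The exposure window is then roughly the snapshot interval plus the transfer
time, which is what makes this "nearly" rather than truly realtime.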

> On 01 Jul 2016, at 16:44, InterNetX - Juergen Gotteswinter <jg at internetx.com> wrote:
> 
> don't get me wrong, what I'm trying to say is that imho you are trying to
> achieve something which looks great until something goes wrong.
> 
> keep it simple, stupid simple, without many moving parts, and avoid
> automagic voodoo wherever possible.
> 
> On 01.07.2016 at 16:41, InterNetX - Juergen Gotteswinter wrote:
>> On 01.07.2016 at 16:39, Julien Cigar wrote:
>>> On Fri, Jul 01, 2016 at 03:44:36PM +0200, InterNetX - Juergen Gotteswinter wrote:
>>>> 
>>>> 
>>>>> On 01.07.2016 at 15:18, Joe Love wrote:
>>>>> 
>>>>>> On Jul 1, 2016, at 6:09 AM, InterNetX - Juergen Gotteswinter <jg at internetx.com> wrote:
>>>>>> 
>>>>>> On 01.07.2016 at 12:57, Julien Cigar wrote:
>>>>>>> On Fri, Jul 01, 2016 at 12:18:39PM +0200, InterNetX - Juergen Gotteswinter wrote:
>>>>>>> 
>>>>>>> of course I'll test everything properly :) I don't have the hardware yet
>>>>>>> so ATM I'm just looking for all the possible "candidates", and I'm 
>>>>>>> aware that redundant storage is not that easy to implement ...
>>>>>>> 
>>>>>>> but what solutions do we have? It's either CARP + ZFS + (HAST|iSCSI), 
>>>>>>> or zfs send|ssh zfs receive as you suggest (but it's
>>>>>>> not realtime), or a distributed FS (which I avoid like the plague..)
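
For reference, the CARP half of the first option is fairly small: both heads
carry the same vhid and the shared service IP, and whichever has the lower
advskew becomes master. A minimal rc.conf sketch -- interface name, vhid,
password and address are placeholders only, not a tested configuration:

    # /etc/rc.conf on the master head
    ifconfig_em0_alias0="inet vhid 1 pass changeme alias 192.168.0.50/32"

    # /etc/rc.conf on the backup head (higher advskew = lower priority)
    ifconfig_em0_alias0="inet vhid 1 advskew 100 pass changeme alias 192.168.0.50/32"

The hard part is everything underneath: keeping the pool in sync via HAST or
iSCSI mirroring and making sure it is only imported on the current CARP master.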
>>>>>> 
>>>>>> zfs send/receive can be nearly realtime.
>>>>>> 
>>>>>> external jbods with cross cabled sas + a commercial cluster solution like
>>>>>> rsf-1. anything else is a fragile construction which begs for disaster.
>>>>> 
>>>>> This sounds similar to the CTL-HA code that went in last year, for which I haven’t seen any sort of how-to.  The RSF-1 stuff sounds like it has more scaling options, though.  Which it probably should, given its commercial operation.
>>>> 
>>>> rsf is what pacemaker / heartbeat tries to be. judge me for linking
>>>> whitepapers, but in this case it's not such evil marketing blah
>>>> 
>>>> http://www.high-availability.com/wp-content/uploads/2013/01/RSF-1-HA-PLUGIN-ZFS-STORAGE-CLUSTER.pdf
>>>> 
>>>> 
>>>> @ Julien
>>>> 
>>>> seems like you take availability really seriously, so I guess you also have
>>>> plans for how to cope with network problems like dead switches, flaky
>>>> cables and so on.
>>>> 
>>>> like using multiple network cards in the boxes, cross cabling between
>>>> the hosts (rs232 and ethernet of course), and using proven reliable network
>>>> switches in a stacked configuration (for example cisco 3750 stacked). not
>>>> to forget redundant power feeds to redundant power supplies.
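
On the NIC side, FreeBSD's lagg(4) makes the "multiple network cards" part
cheap. A minimal sketch, assuming two ports em0/em1 each cabled to a different
member of the switch stack -- interface names and the address are placeholders:

    # /etc/rc.conf -- simple active/passive failover across both uplinks
    cloned_interfaces="lagg0"
    ifconfig_em0="up"
    ifconfig_em1="up"
    ifconfig_lagg0="laggproto failover laggport em0 laggport em1 inet 192.168.0.10/24"

With a proper stack one could presumably run laggproto lacp across the two
members instead of plain failover, but that depends on the switches.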
>>> 
>>> the only thing that is not redundant (yet?) is our switch, an HP ProCurve 
>>> 2530-24G .. it's the next step :)
>> 
>> Arubas, okay, a quick look at the spec sheet does not seem to list a
>> stacking option.
>> 
>> what about power?
>> 
>>> 
>>>> 
>>>> if not, I would start again from scratch.
>>>> 
>>>>> 
>>>>> -Joe
>>>>> 
>>> 
>> 


