HAST + ZFS + NFS + CARP

Ben RUBSON ben.rubson at gmail.com
Wed Aug 17 21:14:38 UTC 2016


> On 17 Aug 2016, at 20:03, Linda Kateley <lkateley at kateley.com> wrote:
> 
> RSF-1 runs in the zfs stack and sends the writes to the second system.

Linda, do you have a link to any documentation about this RSF-1 operation mode?

According to what I read about RSF-1, storage is shared between the nodes and RSF-1 manages the failover; there are not two separate storage systems.
(so I don't really understand how writes are sent to the "second system")

In addition, RSF-1 does not seem to help with long-distance replication to a separate storage system.
But I may be wrong?
This is where ZFS send/receive helps.
Or, even nicer, the solution I proposed a few weeks ago: https://www.illumos.org/issues/7166 (but a lot of work to achieve).
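
For illustration, the incremental send/receive cycle boils down to something
like this (just a sketch; the dataset, snapshot and host names below are
placeholders, not from an actual setup):

    #!/bin/sh
    # Take a new snapshot and ship only the delta since the previous one.
    NOW=$(date +%Y%m%d%H%M%S)
    zfs snapshot tank/data@repl-${NOW}
    zfs send -i tank/data@repl-last tank/data@repl-${NOW} | \
        ssh backuphost zfs receive -u tank/data
    # Tracking which snapshot is "repl-last" (and pruning old ones) is
    # exactly the bookkeeping that tools like zrep handle for you.

Run from cron every few minutes, you lose at most one interval of data if the
primary dies between two runs.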

Ben

> On 8/17/16 11:55 AM, Chris Watson wrote:
>> Of course, if you are willing to accept some amount of data loss, that opens up a lot more options. :)
>> 
>> Some may find that acceptable though. Like turning off fsync with PostgreSQL to get much higher throughput. As long as you are made *very* aware of the risks.
>> 
>> It's good to have input in this thread from someone with more experience with RSF-1 than the rest of us. You confirm what others have said about RSF-1: that it's stable and works well. What were you deploying it on?
>> 
>> Chris
>> 
>> Sent from my iPhone 5
>> 
>> On Aug 17, 2016, at 11:18 AM, Linda Kateley <lkateley at kateley.com> wrote:
>> 
>>> The question I always ask, as an architect, is "can you lose 1 minute's worth of data?" If you can, then batched replication is perfect. If you can't... then HA. Every place I have positioned it, rsf-1 has worked extremely well. If I remember right, it works at the DMU layer. I would suggest trying it. They have been working toward a full FreeBSD solution; I have several customers running it well.
>>> 
>>> linda
>>> 
>>> 
>>> On 8/17/16 4:52 AM, Julien Cigar wrote:
>>>> On Wed, Aug 17, 2016 at 11:05:46AM +0200, InterNetX - Juergen Gotteswinter wrote:
>>>>> 
>>>>> On 17.08.2016 at 10:54, Julien Cigar wrote:
>>>>>> On Wed, Aug 17, 2016 at 09:25:30AM +0200, InterNetX - Juergen Gotteswinter wrote:
>>>>>>> 
>>>>>>> On 11.08.2016 at 11:24, Borja Marcos wrote:
>>>>>>>>> On 11 Aug 2016, at 11:10, Julien Cigar <julien at perdition.city> wrote:
>>>>>>>>> 
>>>>>>>>> As I said in a previous post, I tested the zfs send/receive approach (with
>>>>>>>>> zrep) and it works (more or less) perfectly, so I concur with everything you
>>>>>>>>> said, especially about off-site replication and synchronous replication.
>>>>>>>>> 
>>>>>>>>> Out of curiosity I'm also testing a ZFS + iSCSI + CARP setup at the moment.
>>>>>>>>> I'm still in the early tests and haven't done any heavy writes yet, but ATM it
>>>>>>>>> works as expected: I haven't managed to corrupt the zpool.
>>>>>>>> I must be too old school, but I don’t quite like the idea of using an essentially unreliable transport
>>>>>>>> (Ethernet) for low-level filesystem operations.
>>>>>>>> 
>>>>>>>> In case something went wrong, that approach could risk corrupting a pool. Although, frankly,
>>>>>>>> ZFS is extremely resilient. One of mine even survived a SAS HBA problem that caused some
>>>>>>>> silent corruption.
>>>>>>> Try a dual split import :D I mean, zpool import -f on 2 machines hooked up
>>>>>>> to the same disk chassis.
>>>>>> Yes this is the first thing on the list to avoid .. :)
>>>>>> 
>>>>>> I'm still busy testing the whole setup here, including the
>>>>>> MASTER -> BACKUP failover script (CARP), but I think you can prevent
>>>>>> that thanks to the following:
>>>>>> 
>>>>>> - As long as ctld is running on the BACKUP, the disks are locked
>>>>>> and you can't import the pool (even with -f); for example (filer2 is the
>>>>>> BACKUP):
>>>>>> https://gist.github.com/silenius/f9536e081d473ba4fddd50f59c56b58f
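>>>>>>
>>>>>> For illustration, a rough sketch of the matching step on the BACKUP
>>>>>> during a real failover (the pool name "tank" is only a placeholder):
>>>>>>
>>>>>>   #!/bin/sh
>>>>>>   # The import only becomes possible once ctld has released the local
>>>>>>   # disks, so stop it first, and only when actually failing over.
>>>>>>   service ctld onestop || exit 1
>>>>>>   zpool import -f tank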
>>>>>> 
>>>>>> - The shared pool should not be mounted at boot, and you should ensure
>>>>>> that the failover script is not executed at boot time either: this is
>>>>>> to handle the case where both machines power off and/or come back up at
>>>>>> the same time. Indeed, the CARP interface can "flip" its status if both
>>>>>> machines are powered on at the same time, for example:
>>>>>> https://gist.github.com/silenius/344c3e998a1889f988fdfc3ceba57aaf and
>>>>>> you will have a split-brain scenario
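>>>>>>
>>>>>> One way to avoid the auto-import at boot (just a sketch, pool name is a
>>>>>> placeholder) is to keep the shared pool out of the ZFS cache file:
>>>>>>
>>>>>>   # never recorded in zpool.cache => no automatic import at boot,
>>>>>>   # only the failover script imports the pool explicitly
>>>>>>   zpool set cachefile=none tank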
>>>>>> 
>>>>>> - Sometimes you'll need to reboot the MASTER for some $reasons
>>>>>> (freebsd-update, etc.) and the MASTER -> BACKUP switch should not
>>>>>> happen; this can be handled with a trigger file or something like that
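>>>>>>
>>>>>> Something along these lines at the top of the failover script, for
>>>>>> example (the flag file path is made up):
>>>>>>
>>>>>>   # planned maintenance: "touch /var/run/failover.disabled" beforehand
>>>>>>   if [ -e /var/run/failover.disabled ]; then
>>>>>>       logger "CARP failover suppressed: maintenance flag present"
>>>>>>       exit 0
>>>>>>   fi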
>>>>>> 
>>>>>> - I still have to check whether the order is OK, but I think that as long
>>>>>> as you shut down the replication interface and adapt the advskew of the
>>>>>> CARP interface (including in the config file) before the zpool import -f
>>>>>> in the failover script, you can be relatively confident that nothing
>>>>>> will be written to the iSCSI targets
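>>>>>>
>>>>>> Roughly, something like this (interface names, vhid, advskew values and
>>>>>> pool name are only placeholders):
>>>>>>
>>>>>>   #!/bin/sh
>>>>>>   ifconfig igb1 down                        # 1. cut the replication link
>>>>>>   ifconfig igb0 vhid 1 advskew 0            # 2. take over as CARP MASTER
>>>>>>   sed -i '' 's/advskew 100/advskew 0/' /etc/rc.conf   # persist the change
>>>>>>   zpool import -f tank                      # 3. only then force the import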
>>>>>> 
>>>>>> - A zpool scrub should be run at regular intervals
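>>>>>>
>>>>>> For example via periodic(8), in /etc/periodic.conf (these knobs exist in
>>>>>> the base system, if I'm not mistaken):
>>>>>>
>>>>>>   daily_scrub_zfs_enable="YES"
>>>>>>   daily_scrub_zfs_default_threshold="35"   # days between scrubs, per pool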
>>>>>> 
>>>>>> This is my MASTER -> BACKUP CARP script ATM
>>>>>> https://gist.github.com/silenius/7f6ee8030eb6b923affb655a259bfef7
>>>>>> 
>>>>>> Julien
>>>>>> 
>>>>> The €100 question, without looking at that script in detail: yes, at first
>>>>> glance it's super simple, but why are solutions like rsf-1 so much more
>>>>> powerful / feature-rich? There's a reason for that, which is that they try
>>>>> to cover every possible situation (which makes more than sense here).
>>>> I've never used "rsf-1" so I can't say much more about it, but I have
>>>> no doubts about its ability to handle "complex situations", where
>>>> multiple nodes / networks are involved.
>>>> 
>>>>> That script works for sure, but within very limited cases, IMHO
>>>>> 
>>>>>>> Kaboom, really ugly kaboom. That's what is very likely to happen sooner
>>>>>>> or later, especially when it comes to homegrown automation solutions.
>>>>>>> Even the commercial products, where much more time/work goes into such
>>>>>>> solutions, fail on a regular basis.
>>>>>>> 
>>>>>>>> The advantage of ZFS send/receive of datasets is, however, that you can consider it
>>>>>>>> essentially atomic. A transport corruption should not cause trouble (apart from a failed
>>>>>>>> "zfs receive") and with snapshot retention you can even roll back. You can’t roll back
>>>>>>>> zpool replications :)
>>>>>>>> 
>>>>>>>> ZFS receive does a lot of sanity checks as well. As long as your zfs receive doesn’t involve a rollback
>>>>>>>> to the latest snapshot, it won’t destroy anything by mistake. Just make sure that your replica datasets
>>>>>>>> aren’t mounted and zfs receive won’t complain.
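>>>>>>>>
>>>>>>>> For example (dataset and host names are placeholders, just a sketch):
>>>>>>>>
>>>>>>>>   # make sure the replica can never be mounted by accident
>>>>>>>>   zfs set canmount=noauto tank/replica
>>>>>>>>   # and/or receive with -u so received datasets stay unmounted
>>>>>>>>   zfs send -i tank/data@a tank/data@b | ssh backup zfs receive -u tank/replica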
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Cheers,
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Borja.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>> 
> 


