HAST + ZFS + NFS + CARP

InterNetX - Juergen Gotteswinter juergen.gotteswinter at internetx.com
Wed Aug 17 09:05:51 UTC 2016



Am 17.08.2016 um 10:54 schrieb Julien Cigar:
> On Wed, Aug 17, 2016 at 09:25:30AM +0200, InterNetX - Juergen Gotteswinter wrote:
>>
>>
>> Am 11.08.2016 um 11:24 schrieb Borja Marcos:
>>>
>>>> On 11 Aug 2016, at 11:10, Julien Cigar <julien at perdition.city> wrote:
>>>>
>>>> As I said in a previous post I tested the zfs send/receive approach (with
>>>> zrep) and it works (more or less) perfectly.. so I concur in all what you
>>>> said, especially about off-site replicate and synchronous replication.
>>>>
>>>> Out of curiosity I'm also testing a ZFS + iSCSI + CARP setup at the moment.
>>>> I'm in the early tests and haven't done any heavy writes yet, but ATM it
>>>> works as expected: I haven't managed to corrupt the zpool.
>>>
>>> I must be too old school, but I don’t quite like the idea of using an essentially unreliable transport
>>> (Ethernet) for low-level filesystem operations.
>>>
>>> In case something went wrong, that approach could risk corrupting a pool. Although, frankly,
>>> ZFS is extremely resilient. One of mine even survived a SAS HBA problem that caused some
>>> silent corruption.
>>
>> Try a dual split import :D I mean, zpool import -f on 2 machines hooked up
>> to the same disk chassis.
> 
> Yes this is the first thing on the list to avoid .. :)
> 
> I'm still busy testing the whole setup here, including the 
> MASTER -> BACKUP failover script (CARP), but I think you can prevent
> that thanks to the following:
> 
> - As long as ctld is running on the BACKUP the disks are locked 
> and you can't import the pool (even with -f). For example (filer2 is the
> BACKUP):
> https://gist.github.com/silenius/f9536e081d473ba4fddd50f59c56b58f
> 
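For illustration, a minimal ctl.conf(5) along those lines might look like the
sketch below -- the IQN, portal address and disk path are made up, only the
idea of exporting the raw disks from the BACKUP is taken from the setup
described here:

    portal-group pg0 {
        discovery-auth-group no-authentication
        listen 192.168.100.2
    }

    # while ctld holds these devices open, a local "zpool import -f"
    # on the BACKUP is refused
    target iqn.2016-08.city.perdition:filer2:disk1 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
            path /dev/da2
        }
    }
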
> - The shared pool should not be mounted at boot, and you should ensure
> that the failover script is not executed during boot time either: this is
> to handle the case where both machines power off and/or come back up at
> the same time. Indeed, the CARP interface can "flip" its status if both
> machines are powered on at the same time, for example:
> https://gist.github.com/silenius/344c3e998a1889f988fdfc3ceba57aaf and
> you will end up with a split-brain scenario
> 
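A rough sketch of those two guards, assuming the pool is called "tank" and
using an arbitrary 5 minute grace period after boot:

    # don't record the shared pool in zpool.cache, so it is not
    # auto-imported at the next boot
    zpool set cachefile=none tank

    # near the top of the failover script: ignore CARP events that
    # fire too soon after booting
    boot=$(sysctl -n kern.boottime | sed 's/.*sec = \([0-9]*\),.*/\1/')
    if [ $(( $(date +%s) - boot )) -lt 300 ]; then
        logger "failover: CARP event too soon after boot, ignoring"
        exit 0
    fi
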
> - Sometimes you'll need to reboot the MASTER for some $reason
> (freebsd-update, etc.) and the MASTER -> BACKUP switch should not
> happen; this can be handled with a trigger file or something like that
> 
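The trigger file is easy to add to the failover script -- a sketch, with an
arbitrary path:

    # on the MASTER, before a planned reboot:
    touch /var/run/failover_disabled

    # near the top of the failover script:
    if [ -e /var/run/failover_disabled ]; then
        logger "failover: maintenance flag present, staying BACKUP"
        exit 0
    fi
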
> - I still have to check if the order is OK, but I think that as long
> as you shut down the replication interface and adapt the
> advskew (including the config file) of the CARP interface before the 
> zpool import -f in the failover script, you can be relatively confident 
> that nothing will be written to the iSCSI targets
> 
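A sketch of that order of operations on the BACKUP -- interface names, vhid
and pool name are made up, and the ctld stop follows from the first point
above:

    #!/bin/sh
    ifconfig igb1 down              # 1. cut the replication / iSCSI link
    ifconfig igb0 vhid 1 advskew 0  # 2. win the CARP election
    # 3. also lower the advskew in rc.conf so it survives a reboot
    service ctld onestop            # 4. release the locked disks
    zpool import -f tank            # 5. only now force the import
    service mountd onereload        # 6. re-read the NFS exports
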
> - A zpool scrub should be run at regular intervals
> 
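periodic(8) can take care of the scrubs -- for example in /etc/periodic.conf,
with an arbitrary pool name and threshold:

    daily_scrub_zfs_enable="YES"
    daily_scrub_zfs_pools="tank"
    daily_scrub_zfs_default_threshold="14"  # days between scrubs
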
> This is my MASTER -> BACKUP CARP script ATM
> https://gist.github.com/silenius/7f6ee8030eb6b923affb655a259bfef7
> 
> Julien
> 

100€ question, without looking at that script in detail: yes, at first
glance it looks super simple, but why are solutions like RSF-1 so much more
powerful / feature-rich? There's a reason for that, which is that they try
to cover every possible situation (which makes more than sense for this).

That script works for sure, but only within very limited cases imho.

>>
>> Kaboom, really ugly kaboom. That's what is very likely to happen sooner
>> or later, especially when it comes to homegrown automation solutions.
>> Even the commercial products, where much more time/work goes into such
>> solutions, fail on a regular basis.
>>
>>>
>>> The advantage of ZFS send/receive of datasets is, however, that you can consider it
>>> essentially atomic. A transport corruption should not cause trouble (apart from a failed
>>> "zfs receive") and with snapshot retention you can even roll back. You can’t roll back
>>> zpool replications :)
>>>
>>> ZFS receive does a lot of sanity checks as well. As long as your zfs receive doesn’t involve a rollback
>>> to the latest snapshot, it won’t destroy anything by mistake. Just make sure that your replica datasets
>>> aren’t mounted and zfs receive won’t complain.
>>>
>>>
>>> Cheers,
>>>
>>>
>>>
>>>
>>> Borja.
>>>
>>>
>>>
> 

