HAST + ZFS + NFS + CARP
Ben RUBSON
ben.rubson at gmail.com
Sat Jul 2 15:04:25 UTC 2016
> On 30 Jun 2016, at 20:57, Julien Cigar <julien at perdition.city> wrote:
>
> On Thu, Jun 30, 2016 at 11:32:17AM -0500, Chris Watson wrote:
>>
>>>
>>>>
>>>> Yes, that's another option, so a zpool with two mirrors (local +
>>>> exported iSCSI)?
>>>
>>> Yes, you would then have a real-time replication solution (as HAST provides), whereas ZFS send/receive is not real-time.
>>> Depends on what you need :)
>>>
>>>>
>>>>> ZFS would then know as soon as a disk is failing.
>>
>> So, as an aside but related: for those watching from the peanut gallery, and for the benefit of the OP, perhaps those who run this setup could share some best practices and tips in this thread on making it a good, reliable setup. I can see someone reading this thread, tossing two crappy Ethernet cards in a box, and then complaining it doesn't work well.
>
> It would be more than welcome indeed..! I have the feeling that HAST
> isn't that much used (but maybe I am wrong), and it's difficult to find
> information on its reliability and concrete long-term use cases...
>
> Also, the pros and cons of HAST vs iSCSI would be worth covering.
I did some further testing today.
# serverA, serverB (applied as sketched below) :
kern.iscsi.ping_timeout=5
kern.iscsi.iscsid_timeout=5
kern.iscsi.login_timeout=5
kern.iscsi.fail_on_disconnection=1
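For reference, these can be applied on both servers roughly like this, either live with sysctl(8) or persistently ; nothing here beyond the four OIDs above :
# live :
sysctl kern.iscsi.ping_timeout=5
sysctl kern.iscsi.iscsid_timeout=5
sysctl kern.iscsi.login_timeout=5
sysctl kern.iscsi.fail_on_disconnection=1
# persistent : put the same four lines into /etc/sysctl.conf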
# Preparation :
- serverB : let's make 2 iSCSI targets : rem3, rem4.
- serverB : let's start ctld.
- serverA : let's create a mirror pool made of 4 disks : loc1, loc2, rem3, rem4.
- serverA : pool is healthy.
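For anyone wanting to reproduce this, the preparation looks roughly like the following ; the IQNs, /dev paths and the pool name "tank" are placeholders of my own, and loc1 / loc2 / rem3 / rem4 stand for whatever /dev entries (glabel names, daX, ...) the disks get on serverA, so adapt everything to your setup :
# serverB, /etc/ctl.conf : export two local disks as targets rem3 / rem4 :
portal-group pg0 {
        discovery-auth-group no-authentication
        listen 0.0.0.0
}
target iqn.2016-07.example:rem3 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/da2
        }
}
target iqn.2016-07.example:rem4 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/da3
        }
}
# serverB :
service ctld start
# serverA : attach the two targets (they appear as new daX devices) :
iscsictl -A -p serverB -t iqn.2016-07.example:rem3
iscsictl -A -p serverB -t iqn.2016-07.example:rem4
# serverA : create the pool ; with the two-mirror layout discussed
# up-thread, each mirror pairs a local disk with a remote one
# (a single 4-way mirror would instead be : mirror loc1 loc2 rem3 rem4) :
zpool create tank mirror loc1 rem3 mirror loc2 rem4
zpool status tank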
# Test 1 :
- serverA : put a lot of data into the pool ;
- serverB : stop ctld ;
- serverA : put a lot of data into the pool ;
- serverB : start ctld ;
- serverA : make all pool disks online : it works, pool is healthy.
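The commands behind test 1 are roughly ("tank" still being my placeholder pool name) :
# serverB :
service ctld stop
# serverA : pool keeps running degraded on loc1 / loc2
# serverB :
service ctld start
# serverA :
zpool online tank rem3 rem4
zpool status tank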
# Test 2 :
- serverA : put a lot of data into the pool ;
- serverA : export the pool ;
- serverB : import the pool : it does not work, as ctld locks the disks ! Good news, nice protection (both servers won't be able to access the same disks at the same time).
- serverB : stop ctld ;
- serverB : import the pool : it works, 2 disks missing ;
- serverA : let's make 2 iSCSI targets : rem1, rem2 ;
- serverB : make all pool disks online : it works, pool is healthy.
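Test 2, roughly :
# serverA :
zpool export tank
# serverB : fails while ctld still holds the underlying disks open :
zpool import tank
# serverB :
service ctld stop
zpool import tank        # works, serverA's loc1 / loc2 reported missing
# serverA : export loc1 / loc2 through ctld as rem1 / rem2, then on serverB :
zpool online tank rem1 rem2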
# Test 3 :
- serverA : put a lot of data into the pool ;
- serverB : stop ctld ;
- serverA : put a lot of data into the pool ;
- serverB : import the pool : it works, 2 disks missing ;
- serverA : let's make 2 iSCSI targets : rem1, rem2 ;
- serverB : make all pool disks online : it works, pool is healthy, but of course data written at step 3 is lost (those writes only ever reached serverA's local disks, which get resilvered back to the older state serverB imported).
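One note on test 3 : since serverA never exported the pool, I would expect serverB's import to need forcing, along these lines :
# serverB : pool is still marked active under serverA's hostid :
zpool import -f tank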
# Test 4 :
- serverA : put a lot of data into the pool ;
- serverB : stop ctld ;
- serverA : put a lot of data into the pool ;
- serverA : export the pool ;
- serverA : let's make 2 iSCSI targets : rem1, rem2 ;
- serverB : import the pool : it works, pool is healthy, and data written at step 3 is present (after the clean export, serverA's disks carry the latest txg, so serverB's stale local disks simply resilver).
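Test 4 is the clean failover sequence, roughly (same placeholder IQNs as in the preparation sketch) :
# serverA :
zpool export tank
# serverA : export loc1 / loc2 through ctld as targets rem1 / rem2 :
service ctld start
# serverB : attach serverA's disks, then import :
iscsictl -A -p serverA -t iqn.2016-07.example:rem1
iscsictl -A -p serverA -t iqn.2016-07.example:rem2
zpool import tank
zpool status tank        # healthy, data written at step 3 is present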
# Test 5 :
- serverA : rsync a huge remote repo into the pool in the background ;
- serverB : stop ctld ;
- serverA : 2 disks missing, but rsync still runs flawlessly ;
- serverB : start ctld ;
- serverA : make all pool disks online : it works, pool is healthy.
- serverB : ifconfig <replication_interface> down ;
- serverA : 2 disks missing, but rsync still runs flawlessly ;
- serverB : ifconfig <replication_interface> up ;
- serverA : make all pool disks online : it works, pool is healthy.
- serverB : power reset !
- serverA : 2 disks missing, but rsync still runs flawlessly ;
- serverB : let's wait for server to be up ;
- serverA : make all pool disks online : it works, pool is healthy.
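And the failure injections of test 5, roughly ; igb1 stands for your replication interface, the rsync source for any big remote tree, and the sleep durations are arbitrary :
# serverA : background load :
rsync -a user@somehost:/some/huge/repo/ /tank/repo/ &
# serverB : first failure :
service ctld stop ; sleep 60 ; service ctld start
# serverB : second failure :
ifconfig igb1 down ; sleep 60 ; ifconfig igb1 up
# serverB : third failure : hard power reset, then wait for it to come back
# serverA : after each recovery :
zpool online tank rem3 rem4
zpool status tank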
Quite happy with these tests actually :)
Ben