HAST initial sync speed

Thomas Steen Rasmussen <thomas at gibfest.dk>
Tue Aug 17 22:10:08 UTC 2010


On 17-08-2010 00:10, Pawel Jakub Dawidek wrote:
> On Fri, Aug 13, 2010 at 12:16:30PM +0200, Thomas Steen Rasmussen wrote:
>    
>> Just a quick update, it is still working on syncing the one HAST
>> resource configured
>> the other day:
>>
>> [root@server1 ~]# date && hastctl status
>> Thu Aug 12 14:11:18 CEST 2010
>> hasthd4:
>>    role: primary
>>    provname: hasthd4
>>    localpath: /dev/label/hd4
>>    extentsize: 2097152
>>    keepdirty: 64
>>    remoteaddr: 192.168.0.15
>>    replication: memsync
>>    status: complete
>>    dirty: 102651396096 bytes
>> [root@server1 ~]# date && hastctl status
>> Fri Aug 13 09:48:06 CEST 2010
>> hasthd4:
>>    role: primary
>>    provname: hasthd4
>>    localpath: /dev/label/hd4
>>    extentsize: 2097152
>>    keepdirty: 64
>>    remoteaddr: 192.168.0.15
>>    replication: memsync
>>    status: complete
>>    dirty: 80425779200 bytes
>>
>> About 22 gigabytes in just under 20 hours, i.e. roughly 300 kB/s.
>>
>> Any suggestions are appreciated,
>>      
> I'm sorry for the delay, I needed some time to prepare a test
> environment. I'm currently running synchronization between two HAST
> nodes connected with a 1Gb link; it took 4 minutes 5 seconds to
> synchronize 16GB of data, so the speed was around 68MB/s.
>
> I did the test on memory-backed md(4) devices to exclude disk speed.
> Could you do a similar test? You need to create the md(4) devices this
> way on both nodes:
>
> 	# mdconfig -a -t malloc -s 16g -o compress
>
> The 'compress' option makes the md(4) devices consume no space when
> writing just zeros.
>
> My hast.conf looks like this:
>
> 	resource test {
> 		local /dev/md0
>
> 		on nodea {
> 			remote tcp4://10.0.0.1
> 		}
> 		on nodeb {
> 			remote tcp4://10.0.0.2
> 		}
> 	}
>
> This will help us tell how fast your network is. You can observe the
> speed with gstat(8).
>
>    
Hello,

I performed the tests with md devices like you asked. Using memory
disks doesn't seem to make any difference: the sync is still running at
200-300 kB/s according to gstat. The network is plenty fast; as I
mentioned in an earlier mail, once the initial sync is done I can reach
almost wire speed, over 100 megabytes per second.
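
For reference, this is roughly how I watched the sync while it ran (a
minimal sketch, assuming the local provider is md0 as in your example;
-f takes a regular expression, so adjust it to match your device):

# gstat -I 1s -f 'md0'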

Just for reference, an iperf test:
# iperf -c 192.168.0.15
------------------------------------------------------------
Client connecting to 192.168.0.15, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.14 port 64049 connected with 192.168.0.15 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.08 GBytes    929 Mbits/sec

Even scp is fast:
# scp FreeBSD-8.1-RELEASE-amd64-disc1.iso 192.168.0.15:/data/
FreeBSD-8.1-RELEASE-amd64-disc1.iso      100%  682MB  85.2MB/s   00:08

But HAST is still crawling along at the same slow rate:
# while true; do date && hastctl status && sleep 60; done
Wed Aug 18 00:04:45 CEST 2010
mdtest:
   role: primary
   provname: mdtest
   localpath: /dev/md0
   extentsize: 2097152
   keepdirty: 64
   remoteaddr: 192.168.0.15
   replication: memsync
   status: complete
   dirty: 16909336576 bytes
Wed Aug 18 00:05:45 CEST 2010
mdtest:
   role: primary
   provname: mdtest
   localpath: /dev/md0
   extentsize: 2097152
   keepdirty: 64
   remoteaddr: 192.168.0.15
   replication: memsync
   status: complete
   dirty: 16890462208 bytes
Wed Aug 18 00:06:45 CEST 2010
mdtest:
   role: primary
   provname: mdtest
   localpath: /dev/md0
   extentsize: 2097152
   keepdirty: 64
   remoteaddr: 192.168.0.15
   replication: memsync
   status: complete
   dirty: 16869490688 bytes
Wed Aug 18 00:07:45 CEST 2010
mdtest:
   role: primary
   provname: mdtest
   localpath: /dev/md0
   extentsize: 2097152
   keepdirty: 64
   remoteaddr: 192.168.0.15
   replication: memsync
   status: complete
   dirty: 16850616320 bytes
^C
#
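
As a back-of-the-envelope check (just the numbers from the output
above, nothing more): the dirty counter drops by 16909336576 -
16850616320 = 58720256 bytes over the 180 seconds between the first and
last samples, i.e. roughly 320 kB/s:

# echo '(16909336576 - 16850616320) / 180 / 1024' | bc
318

That matches the 200-300 kB/s I see in gstat, and at this rate the 16GB
md device would take around half a day to finish the initial sync.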

Thanks,

Thomas Steen Rasmussen

