HAST initial sync speed
Thomas Steen Rasmussen
thomas@gibfest.dk
Tue Aug 10 02:14:23 UTC 2010
On 08-08-2010 17:17, Thomas Steen Rasmussen wrote:
> On 06-08-2010 15:50, Pawel Jakub Dawidek wrote:
>> On Tue, Aug 03, 2010 at 11:31:58AM +0200, Thomas Rasmussen wrote:
>>
>>> Hello list,
>>>
>>> I finally got my ZFS/HAST setup up and running, or trying to at least.
>>> I am wondering how fast the initial HAST sync normally is - I created
>>> these 4 HAST providers yesterday on 4 146 gig drives, and they
still each
>>> have over 90 gigabytes 'dirty' today. The machines are powerful (dell
>>> r710) and are otherwise idle, and they are connected to the same
gigabit
>>> switch.
>>>
>>> I can supply details about any part of the configuration if needed,
but I
>>> just wanted to ask if you guys believe something is wrong here. I
can't help
>>> but think, if the initial sync takes 24+ hours, then if I ever need to
>>> replace one of the servers, I will be without redundancy until the new
>>> server reaches 0 'dirty' bytes, correct ?
>>>
>> Correct, but synchronization should take much, much less time.
>> Is dirty count actually decreasing?
>>
>>
> Hello,
>
> Yes, it was decreasing steadily but very slowly. It finished between
> Thursday evening and Friday morning, and the dirty count is now 0. All
> in all it took over 72 hours, transferring around 20 Mbit/s while doing
> this. However, if I copied a large file to the primary HAST node, it
> would use a lot more bandwidth. It is as if HAST synchronizes the "empty
> space" at a lower priority or something. Does that make any sense? The
> servers are not in production, so I can perform any testing needed.
> Thank you for your reply.
>
> Regards
>
> Thomas Steen Rasmussen
>
Hello again,
I just wanted to include the configs here for completeness:
/etc/hast.conf:
-----------------------------
resource hasthd4 {
        local /dev/label/hd4
        on server1 {
                remote 192.168.0.15
        }
        on server2 {
                remote 192.168.0.14
        }
}

resource hasthd5 {
        local /dev/label/hd5
        on server1 {
                remote 192.168.0.15
        }
        on server2 {
                remote 192.168.0.14
        }
}

resource hasthd6 {
        local /dev/label/hd6
        on server1 {
                remote 192.168.0.15
        }
        on server2 {
                remote 192.168.0.14
        }
}

resource hasthd7 {
        local /dev/label/hd7
        on server1 {
                remote 192.168.0.15
        }
        on server2 {
                remote 192.168.0.14
        }
}
-----------------------------
To create the setup I ran the following commands on both servers:
glabel label ssd0 /dev/mfid1
glabel label ssd1 /dev/mfid2
glabel label hd4 /dev/mfid3
glabel label hd5 /dev/mfid4
glabel label hd6 /dev/mfid5
glabel label hd7 /dev/mfid6
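For completeness, the labels can be verified on each machine before
layering HAST on top:

# should list ssd0, ssd1 and hd4 through hd7 as active label providers
glabel status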
And on server2:
[root@server2 ~]# hastctl create hasthd4
[root@server2 ~]# hastctl create hasthd5
[root@server2 ~]# hastctl create hasthd6
[root@server2 ~]# hastctl create hasthd7
[root@server2 ~]# /etc/rc.d/hastd start
[root@server2 ~]# hastctl role secondary all
And on server1:
[root@server1 ~]# hastctl create hasthd4
[root@server1 ~]# hastctl create hasthd5
[root@server1 ~]# hastctl create hasthd6
[root@server1 ~]# hastctl create hasthd7
[root@server1 ~]# /etc/rc.d/hastd start
[root@server1 ~]# hastctl role primary all
This made the HAST devices appear under /dev/hast/ on server1.
Then I created the ZFS pool on top, on server1:
zpool create hatank raidz2 /dev/hast/hasthd4 /dev/hast/hasthd5 \
    /dev/hast/hasthd6 /dev/hast/hasthd7 \
    cache /dev/label/ssd0 /dev/label/ssd1
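Afterwards, the pool layout can be checked with:

# confirm all four HAST providers and both cache devices are present
zpool status hatank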
This resulted in the following "hastctl status" output, on server1:
hasthd4:
  role: primary
  provname: hasthd4
  localpath: /dev/label/hd4
  extentsize: 2097152
  keepdirty: 64
  remoteaddr: 192.168.0.15
  replication: memsync
  status: complete
  dirty: 146051956736 bytes
hasthd5:
  role: primary
  provname: hasthd5
  localpath: /dev/label/hd5
  extentsize: 2097152
  keepdirty: 64
  remoteaddr: 192.168.0.15
  replication: memsync
  status: complete
  dirty: 146045665280 bytes
hasthd6:
  role: primary
  provname: hasthd6
  localpath: /dev/label/hd6
  extentsize: 2097152
  keepdirty: 64
  remoteaddr: 192.168.0.15
  replication: memsync
  status: complete
  dirty: 146047762432 bytes
hasthd7:
  role: primary
  provname: hasthd7
  localpath: /dev/label/hd7
  extentsize: 2097152
  keepdirty: 64
  remoteaddr: 192.168.0.15
  replication: memsync
  status: complete
  dirty: 146047762432 bytes
--------------------------------------------------
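Since the number to watch is the per-resource dirty counter, something
like this loop can log the resync rate over time (a rough sketch; the
pattern just picks out the resource names and the dirty lines):

# print the dirty counters once a minute to watch the resync rate
while true; do
    date
    hastctl status all | grep -E '^hast|dirty'
    sleep 60
done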
The problem again is simply that the initial synchronization
took far too long. If I copy a large file to the primary HAST
node now, it syncs very quickly. I am open to any input; I
obviously can't put HAST into production before this problem is solved.
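To rule out the network itself, raw TCP throughput between the two
boxes can be measured with nc(1) from the base system (the port number
5001 is arbitrary; any free port will do):

# on server2: discard everything arriving on TCP port 5001
nc -l 5001 > /dev/null

# on server1: push 1 GB of zeroes across; dd prints the throughput
dd if=/dev/zero bs=1m count=1024 | nc 192.168.0.15 5001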
Thank you again.
Thomas Steen Rasmussen