hast vs ggate+gmirror synchronisation speed
Mikolaj Golub
to.my.trociny at gmail.com
Fri Oct 22 14:51:11 UTC 2010
On Thu, 21 Oct 2010 13:25:34 +0100 Pete French wrote:
PF> Well, I bit the bullet and moved to using hast - all went beautifully,
PF> and I migrated the pool with no downtime. The one thing I do notice,
PF> however, is that the synchronisation with hast is much slower
PF> than the older ggate+gmirror combination. It's about half the
PF> speed in fact.
PF> When I originally set up my ggate configuration I did a lot of tweaks to
PF> get the speed good - these consisted of expanding the send and
PF> receive space for the sockets using sysctl.conf, and then providing
PF> large buffers to ggate. Is there a way to control this with hast ?
PF> I still have the sysctls set (as the machines have not rebooted)
PF> but I can't see any options in hast.conf which are equivalent to the
PF> "-S 262144 -R 262144" which I use with ggate
PF> Any advice, or am I barking up the wrong tree here ?
Currently there are no options in hast.conf to change the send and receive
buffer sizes. They are hardcoded in sbin/hastd/proto_tcp4.c:
	val = 131072;
	if (setsockopt(tctx->tc_fd, SOL_SOCKET, SO_SNDBUF, &val,
	    sizeof(val)) == -1) {
		pjdlog_warning("Unable to set send buffer size on %s", addr);
	}

	val = 131072;
	if (setsockopt(tctx->tc_fd, SOL_SOCKET, SO_RCVBUF, &val,
	    sizeof(val)) == -1) {
		pjdlog_warning("Unable to set receive buffer size on %s", addr);
	}
You could change the values and recompile hastd :-). It would be interesting
to know the results of your experiment (if you try it).
Also note there is another hardcoded value in sbin/hastd/proto_common.c:
/* Maximum size of packet we want to use when sending data. */
#define MAX_SEND_SIZE 32768
that looks like it might affect synchronization speed too. Previously it was
128kB, but it was changed to 32kB after a report of slow synchronization with
MAX_SEND_SIZE=128kB:
http://svn.freebsd.org/viewvc/base?view=revision&revision=211452
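To illustrate what the constant controls: a payload larger than MAX_SEND_SIZE
has to go out in several send(2) calls. A minimal sketch of that arithmetic
(my own illustration, not the hastd code itself):

```c
#include <assert.h>
#include <stddef.h>

/* Mirrors the constant in sbin/hastd/proto_common.c. */
#define MAX_SEND_SIZE	32768

/*
 * Hypothetical helper: number of send(2) calls needed for a payload of
 * `size' bytes when each call is capped at MAX_SEND_SIZE bytes --
 * plain ceiling division.
 */
static size_t
send_chunks(size_t size)
{
	return ((size + MAX_SEND_SIZE - 1) / MAX_SEND_SIZE);
}
```

With the old 128kB value a single chunk was exactly as large as the 128kB
socket buffer above, so one send could fill the buffer completely and block
until it drained; with 32kB a 128kB request goes out as four smaller sends.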
I wonder whether the slow synchronization with MAX_SEND_SIZE=131072 could have
been due to SO_SNDBUF/SO_RCVBUF being equal to that size. Maybe by increasing
SO_SNDBUF/SO_RCVBUF we could get better performance with MAX_SEND_SIZE=128kB?
--
Mikolaj Golub