Channel bonding.

Sean seancody at gmail.com
Fri Apr 22 08:32:16 PDT 2005


I've been experimenting with channel bonding as a means of improving
the performance of some heavily used file servers.  Currently I am
using a single Intel 1000MT interface on each file server, and it has
rather lackluster performance.

I've set two ports of my switch (an Extreme BlackDiamond 6800) to
'shared' and am using an Intel 1000MT Dual Port for the bonding
interfaces.
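
For what it's worth, the switch side was configured with ExtremeWare's
port sharing.  From memory the command was along these lines (the
slot:port numbers below are placeholders, not my actual config):

enable sharing 8:1 grouping 8:1-8:2

ExtremeWare also takes a load-sharing algorithm keyword on that
command; I left it at the default.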

The performance increase I see is marginally better than just the one
interface (70MB/s [bonded] vs 60MB/s [single]), which is slightly
disappointing.  I am using ifstat to monitor network throughput and
iostat to monitor disk throughput (30MB/s on a 3ware 7500-12, again
disappointing), and a variant of tcpblast to generate traffic.  I'm
using 4 other machines (on the same blade on the switch) to generate
the traffic to the bonded interface; all are similar hardware with
varying versions of FreeBSD.  In order to get the numbers as high as
I have, I've enabled polling (which has some stability issues when
used under SMP).
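
The monitoring commands I've been running look roughly like this
(flags quoted from memory):

ifstat -i em0,em1 1
netstat -w 1 -I em0
iostat -w 1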

Before I drop everything and move over to trying out ng_fec,
I wanted to get a few opinions on other things I can check or try.
These servers typically have anywhere between 20-100 clients reading
and writing many large files as fast as they can.  So far the machines
only perform well when there are fewer than 20 clients.  The whole
point of the experiment is to increase the performance of our current
resources instead of buying more servers.  I really don't know
what to expect (in terms of performance) from this, but just based on
the 'ratings' of the individual parts this machine is not performing
very well.
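
In case it matters, the ng_fec setup I'd be testing next looks
roughly like the following.  I've pieced this together from examples
I've seen posted, so treat the exact message names as unverified:

kldload ng_fec.ko
ngctl mkpeer fec dummy fec
ngctl msg fec0: add_iface '"em0"'
ngctl msg fec0: add_iface '"em1"'
ngctl msg fec0: set_mode_inet
ifconfig fec0 A.B.C.D netmask 255.255.255.0 up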

In case anyone has any ideas, I've included the 'specs' of the
hardware below.

Hardware:
	Dual Intel Xeon CPU 2.66GHz 
	Intel Server SE7501BR2 Motherboard
	2X 512 MB Registered ECC DDR RAM
	3ware 7500-12 (12x120GB, RAID-5)
	Intel PRO/1000 MT Dual Port (em0,1)
	Intel PRO/1000 MT (On board) (em2)
	
Switch:
	Extreme Black Diamond 6800
	Gigabit Blade: G24T^3 51052

Kernel:
	FreeBSD phoenix 5.3-RELEASE FreeBSD 5.3-RELEASE #1: Wed Apr 20
	13:33:09 CDT 2005
	root@phoenix.franticfilms.com:/usr/src/sys/i386/compile/SMP  i386

Channel Bonding commands used:
# Bring up both physical interfaces.
ifconfig em0 up
ifconfig em1 up
# Load ng_ether so the em interfaces show up as netgraph nodes.
kldload ng_ether.ko
# Attach a one2many node to em0's upper hook and hang both
# interfaces' lower hooks off of it.
ngctl mkpeer em0: one2many upper one
ngctl connect em0: em0:upper lower many0
ngctl connect em1: em0:upper lower many1
# Allow em1 to xmit/recv em0 frames.
ngctl msg em1: setpromisc 1
ngctl msg em1: setautosrc 0
# Round-robin transmit over both links (xmitAlg=1), manual failure
# handling (failAlg=1), both links enabled.
ngctl msg em0:upper setconfig "{ xmitAlg=1 failAlg=1 enabledLinks=[ 1 1 ] }"
ifconfig em0 A.B.C.D netmask 255.255.255.0
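
To sanity check that both links are actually carrying traffic, I've
also been querying the node with getconfig/getstats (assuming I'm
reading the ng_one2many man page right, getstats takes a link number):

ngctl msg em0:upper getconfig
ngctl msg em0:upper getstats 0
ngctl msg em0:upper getstats 1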

Contents of /etc/sysctl.conf:
net.inet.tcp.inflight_enable=1
net.inet.tcp.sendspace=32767
net.inet.tcp.recvspace=32767
net.inet.tcp.delayed_ack=0
vfs.hirunningspace=10485760
vfs.lorunningspace=10485760
net.inet.tcp.local_slowstart_flightsize=32767
net.inet.tcp.rfc1323=1
kern.maxfilesperproc=2048
vfs.vmiodirenable=1
kern.ipc.somaxconn=4096
kern.maxfiles=65536
kern.polling.enable=1
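
One more thing still on my list is bumping the TCP socket buffers,
since the 32K send/recv space above may be capping what a single GigE
stream can do.  Something like this is what I had in mind (the sizes
are a guess; kern.ipc.maxsockbuf needs to be raised to cover them):

sysctl kern.ipc.maxsockbuf=2097152
sysctl net.inet.tcp.sendspace=262144
sysctl net.inet.tcp.recvspace=262144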

-- 
Sean

