Terrible ix performance

Outback Dingo outbackdingo at gmail.com
Tue Jul 2 15:04:04 UTC 2013


I've got a high-end storage server here; iperf shows decent network I/O:

iperf -i 10 -t 20 -c 10.0.96.1 -w 2.5M -l 2.5M
------------------------------------------------------------
Client connecting to 10.0.96.1, TCP port 5001
TCP window size: 2.50 MByte (WARNING: requested 2.50 MByte)
------------------------------------------------------------
[  3] local 10.0.96.2 port 34753 connected with 10.0.96.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  9.78 GBytes  8.40 Gbits/sec
[  3] 10.0-20.0 sec  8.95 GBytes  7.69 Gbits/sec
[  3]  0.0-20.0 sec  18.7 GBytes  8.05 Gbits/sec
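
That is a single TCP stream; to rule out a per-stream limit I can also run the same test with several parallel streams (a sketch; -P 4 just picks four client threads):

iperf -i 10 -t 20 -c 10.0.96.1 -w 2.5M -l 2.5M -P 4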


The card has a 3-meter Cisco twinax cable connected to it, going
through a Fujitsu switch. We have tweaked various networking and kernel
sysctls; however, over sftp and NFS sessions I can't get better than 100 MB/s
from a zpool with 8 mirrored vdevs. We also have an identical box with a
1-meter Cisco twinax cable that writes at 2.4 Gb/s but reads at only
1.4 Gb/s...

Does anyone have an idea what the bottleneck could be? This is a
shared storage array with dual LSI controllers connected to 32 drives via
an enclosure, and local dd and other tests show the zpool performs quite well.
However, as soon as we introduce any type of protocol (sftp, Samba, NFS),
performance plummets. I'm quite puzzled and have run out of ideas.
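
One isolation test that should take both ZFS and the ssh cipher out of the picture (sftp is often CPU-bound on encryption) is pushing raw zeros through nc; a sketch, assuming nc is on both boxes and port 5002 is free:

on the receiver:   nc -l 5002 > /dev/null
on the sender:     dd if=/dev/zero bs=1m count=10000 | nc 10.0.96.1 5002

If that runs near wire speed while sftp/NFS stay at 100 MB/s, the pool and the network are fine and the bottleneck is in the protocol layer.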

ix0 at pci0:2:0:0: class=0x020000 card=0x000c8086 chip=0x10fb8086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82599EB 10-Gigabit SFI/SFP+ Network Connection'
    class      = network
    subclass   = ethernet
ix1 at pci0:2:0:1: class=0x020000 card=0x000c8086 chip=0x10fb8086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82599EB 10-Gigabit SFI/SFP+ Network Connection'
    class      = network
    subclass   = ethernet
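
To check whether the card itself is the limit, I can also look at the per-queue counters and interrupt distribution (a sketch; the dev.ix.0 sysctl tree varies between driver versions):

sysctl dev.ix.0 | less
vmstat -i | grep ix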

