Slow performance in high-latency situations on FreeNAS / FreeBSD 9

Adam Baxter voltagex at voltagex.org
Wed Feb 10 14:18:16 UTC 2016


Hi all,

I've got a new FreeNAS 9.3 box which gets very, very slow
transfers once the latency to the remote host goes over 200ms.

The system is based on a SuperMicro A1SRi-2758F board - see
http://www.supermicro.com/products/motherboard/Atom/X10/A1SRi-2758F.cfm.
FreeNAS boots fine on it once you tell it to load the xhci driver on
boot.
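
For anyone else with this board, "load the xhci driver" just means the usual
loader tunable, set either in /boot/loader.conf or via the FreeNAS Tunables
page (loader type):

xhci_load="YES"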

uname -a says FreeBSD freenas.local 9.3-RELEASE-p31 FreeBSD
9.3-RELEASE-p31 #0 r288272+33bb475: Wed Feb  3 02:19:35 PST 2016
root@build3.ixsystems.com:/tank/home/stable-builds/FN/objs/os-base/amd64/tank/home/stable-builds/FN/FreeBSD/src/sys/FREENAS.amd64
 amd64

The network card is new to me; apparently it's an Intel i354 / C2000
integrated controller - there are four gigabit ports on the back of the
machine and a fifth for IPMI.

I realise I'm limiting myself by staying on an OS based on FreeBSD 9,
but I don't feel confident enough with FreeBSD to jump to 10 yet.

Please let me know if I've left any critical information out.

The iperf server was a Linux VM on a Windows host, with a VirtualBox
bridged interface to a Broadcom NIC.
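
The server side is nothing special - just stock iperf in server mode on
whichever box is receiving:

iperf -s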

0.5ms latency - standard LAN transfer, FreeNAS -> Linux VM
[  3] local 10.1.1.2 port 40116 connected with 10.1.1.111 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.01 GBytes   871 Mbits/sec

That looks fine-ish.

The problem occurs when I crank up the latency (using tc qdisc on the
VM). This matches the transfer rates I see from remote hosts once the
latency hits 200-300ms (common for Australia->UK).
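
For the record, the delay was added on the Linux VM with netem, roughly like
this (eth0 here is that VM's bridged interface; adjust to taste):

tc qdisc add dev eth0 root netem delay 300ms   # add the artificial delay
tc qdisc show dev eth0                         # check it took effect
tc qdisc del dev eth0 root                     # remove it again afterwards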

300ms simulated latency - FreeNAS -> Linux VM
[voltagex@freenas ~]$ iperf -c 10.1.1.111
------------------------------------------------------------
Client connecting to 10.1.1.111, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[  3] local 10.1.1.2 port 33023 connected with 10.1.1.111 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.3 sec  3.75 MBytes  3.06 Mbits/sec
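
(Back-of-the-envelope: TCP throughput is capped at roughly window/RTT, so a
window stuck at the 32.5 KByte default would only allow about
32.5 KB * 8 / 0.3 s = ~0.9 Mbit/s at 300ms. Getting 3 Mbit/s suggests the
window is growing a little, but nowhere near the ~33 MBytes you'd need to
fill a gigabit link at that RTT.)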


Whereas Linux VM -> Linux VM fares quite a lot better, even with the
added latency:
voltagex@devbox:~$ iperf -c 10.1.1.111
------------------------------------------------------------
Client connecting to 10.1.1.111, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.1.1.112 port 51790 connected with 10.1.1.111 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  23.5 MBytes  19.6 Mbits/sec


Cranking up the window size on FreeNAS/FreeBSD doesn't seem to help, either.
[voltagex@freenas ~]$ iperf -w 85k -c 10.1.1.111
------------------------------------------------------------
Client connecting to 10.1.1.111, TCP port 5001
TCP window size: 86.3 KByte (WARNING: requested 85.0 KByte)
------------------------------------------------------------
[  3] local 10.1.1.2 port 15033 connected with 10.1.1.111 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.6 sec  2.38 MBytes  1.88 Mbits/sec
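
The obvious next thing to try seems to be letting the FreeBSD side use much
bigger socket buffers. These are the stock FreeBSD 9.3 autotuning knobs I'm
planning to experiment with (the values are a guess, not a recommendation):

sysctl kern.ipc.maxsockbuf=16777216        # raise the hard socket buffer cap
sysctl net.inet.tcp.sendbuf_max=16777216   # let send-side autotuning grow further
sysctl net.inet.tcp.recvbuf_max=16777216   # same for the receive side
sysctl net.inet.tcp.sendbuf_inc=65536      # grow the buffers in bigger steps
sysctl net.inet.tcp.recvbuf_inc=65536
sysctl net.inet.tcp.sendbuf_auto net.inet.tcp.recvbuf_auto   # confirm autotuning is on

Since the FreeNAS box is the sending side in these tests, I'd expect
sendbuf_max to be the one that matters most.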

I also tried booting the machine from an Ubuntu 15.10 LiveCD - the
numbers are about what you'd expect, except that when I captured the
no-latency test with tshark, the throughput dropped to around 200 megabits.

With simulated latency:
ubuntu@ubuntu:~$ iperf -c 10.1.1.115
------------------------------------------------------------
Client connecting to 10.1.1.115, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.1.1.2 port 56184 connected with 10.1.1.115 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.4 sec  16.0 MBytes  12.9 Mbits/sec

Without simulated latency + tshark running:
[  3] local 10.1.1.2 port 56192 connected with 10.1.1.115 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   250 MBytes   209 Mbits/sec

"Normal" throughput in Ubuntu is about 730 megabits.


The info I can pull about the card itself (from dmesg and pciconf -lv):
igb0: <Intel(R) PRO/1000 Network Connection version - 2.4.0> port
0xe0c0-0xe0df mem 0xdf260000-0xdf27ffff,0xdf30c000-0xdf30ffff irq 20
at device 20.0 on pci0
igb0: Using MSIX interrupts with 9 vectors
igb0: Ethernet address: 0c:c4:7a:6b:bf:34
igb0: Bound queue 0 to cpu 0
igb0: Bound queue 1 to cpu 1
igb0: Bound queue 2 to cpu 2
igb0: Bound queue 3 to cpu 3
igb0: Bound queue 4 to cpu 4
igb0: Bound queue 5 to cpu 5
igb0: Bound queue 6 to cpu 6
igb0: Bound queue 7 to cpu 7
igb0: promiscuous mode enabled
igb0: link state changed to DOWN
igb0: link state changed to UP

igb0@pci0:0:20:0:       class=0x020000 card=0x1f4115d9 chip=0x1f418086 rev=0x03 hdr=0x00
    vendor     = 'Intel Corporation'
    class      = network
    subclass   = ethernet

I am not running pf (yet), and 'ifconfig igb0 -tso' seemed to have no
impact. I have not yet had a chance to try FreeBSD 10 in live mode.
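
Other things on my to-try list for the FreeBSD side, roughly:

ifconfig igb0 -tso -lro -rxcsum -txcsum   # turn off the remaining offloads on the port
sysctl net.inet.tcp.cc.available          # see which congestion control modules are loaded
kldload cc_htcp                           # H-TCP generally copes better with high-BDP paths
sysctl net.inet.tcp.cc.algorithm=htcp     # switch away from the default newreno

(I'll back each change out if it makes no difference.)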

Packet captures are available at
http://static.voltagex.org/freebsd-troubleshooting/iperf.tar.xz in
pcapng format (unpacks to about 750MB, sorry!).

Thanks in advance,
Adam

