Strange results of TP-Link WDR3600 wired ethernet performance test

Eugene Grosbein eugen at grosbein.net
Mon Oct 26 15:21:30 UTC 2015


Hi!

When I first got my TP-Link WDR3600, it had very old stock firmware without support
for hardware NAT acceleration. I ran some wired ethernet performance tests with
that firmware. I have a FreeBSD 10.2/amd64 desktop (4-core i7 @ 3.1 GHz) with
two Intel gigabit ethernet ports, em0 and em1, to send and receive FTP traffic.

I use the VNET and FIB kernel features to force the kernel to pass traffic from itself
to itself over the physical ports rather than the loopback interface:

jail -c name=test vnet persist exec.fib=1
ifconfig em1 vnet test
etc.
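
(For reference, a slightly fuller sketch of that setup; the addresses are
illustrative and it assumes a kernel built with "options VIMAGE" and more
than one FIB, e.g. "options ROUTETABLES=2":)

jail -c name=test vnet persist exec.fib=1   # jail with its own network stack, using FIB 1
ifconfig em1 vnet test                      # move em1 into the jail's vnet
jexec test ifconfig em1 192.168.1.2/24 up   # address inside the jail (illustrative)
ifconfig em0 192.168.0.2/24 up              # host-side address (illustrative)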

I've verified that this Core i7 easily saturates a 1 Gbps link with a
direct ethernet cable between em0 and em1. Then I connected em1 to a LAN port
of the WDR3600 and em0 to its WAN port. The WDR3600 forwarded over 35 MBytes/s
from WAN to LAN (according to "systat -ifstat") with NAT disabled and
over 33 MBytes/s with NAT enabled (but not hardware NAT).
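
(As an illustration of the measurement itself, assuming an FTP server running
on the host side; the address and file name are hypothetical:)

jexec test fetch -o /dev/null ftp://192.168.0.2/bigfile  # pull FTP traffic through the router, WAN to LAN
systat -ifstat 1                                         # watch per-interface throughput each second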

Then I upgraded the device to the latest stock firmware, version
3.14.3 Build 150605 Rel.52210n, which supports hardware NAT acceleration,
and repeated the tests. With NAT completely disabled, it forwarded over 33 MBytes/s.
With NAT enabled but hardware NAT acceleration disabled, it forwarded over 28 MBytes/s.
With hardware NAT acceleration enabled, it forwarded over 112 MBytes/s of FTP traffic,
saturating the gigabit link.

Now I've performed my first FreeBSD 11 performance test in the same environment,
and it forwards only about 6 MBytes/s while CPU load stays below 50%.
Here is a "top -SHPI" report:

last pid:   628;  load averages:  0.85,  0.76,  0.69    up 0+00:44:39  22:02:55
48 processes:  2 running, 35 sleeping, 11 waiting
CPU:  1.6% user,  0.0% nice,  2.3% system, 42.2% interrupt, 53.9% idle
Mem: 5244K Active, 11M Inact, 8616K Wired, 496K Buf, 96M Free
Swap: 

  PID USERNAME PRI NICE   SIZE    RES STATE    TIME    WCPU COMMAND
   10 root     155 ki31     0K     8K RUN     40:50  53.07% idle
   11 root     -92    -     0K    88K WAIT     2:18  43.38% intr{int2 arge0}
    2 root     -16    -     0K     8K -        0:05   1.35% rand_harvestq
  628 root      40    0  7624K  2640K RUN      0:01   1.33% top
   11 root     -60    -     0K    88K WAIT     0:18   0.72% intr{swi4: clock (0
   11 root     -76    -     0K    88K WAIT     0:00   0.15% intr{swi0: uart}
   15 root     -16    -     0K     8K -        0:00   0.01% schedcpu
    5 root     -16    -     0K    16K psleep   0:00   0.00% pagedaemon{pagedaem
    8 root      -8    -     0K     8K -        0:00   0.00% bufspacedaemon
   14 root      -4    -     0K     8K vlruwt   0:00   0.00% vnlru
    9 root      16    -     0K     8K syncer   0:00   0.00% syncer
    7 root     -16    -     0K     8K psleep   0:00   0.00% bufdaemon

I use sysctl net.inet.ip.fastforwarding=1, and no packet filters are compiled in or loaded.
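
(To double-check that, something like the following, grepping for the usual
ipfw/pf/ipfilter module names:)

sysctl net.inet.ip.fastforwarding   # expect: net.inet.ip.fastforwarding: 1
kldstat | grep -E 'ipfw|pf|ipl'     # expect no output when no filters are loaded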

The question is: why are there so many idle CPU cycles, and why is the wired
ethernet performance so bad?


