NFS on 10G interface terribly slow

Gerrit Kühn gerrit.kuehn at aei.mpg.de
Fri Jun 26 09:56:17 UTC 2015


On Thu, 25 Jun 2015 12:56:36 -0700 Scott Larson <stl at wiredrive.com> wrote
about Re: NFS on 10G interface terribly slow:

SL>      We've got 10.0 and 10.1 servers accessing Isilon and Nexenta via
SL> NFS with Intel 10G gear, bursting to near wire speed with the stock
SL> MTU/rsize/wsize; it works as expected.

That sounds promising, so we should be able to improve things here, too.

SL> TSO definitely needs to be enabled for that performance.

Ok, I switched it back on.
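For reference (assuming the first ix unit, ix0, here), re-enabling it at
runtime is just:

ifconfig ix0 tso             # turn TSO back on
ifconfig ix0 | grep options  # TSO4/TSO6 should be listed again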

SL> Other things to look at: Are all the servers involved negotiating the
SL> correct speed and duplex, with TSO?

We have a direct link between the systems, with only one switch in between
acting as a media converter from fibre to copper. Both machines and the
switch show a 10G full-duplex link, with not a single error or collision to
be spotted. The switch carries only these two lines, nothing else.
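On the FreeBSD end that check boils down to something like (again assuming
ix0):

ifconfig ix0 | grep -E 'media|status'   # should report the 10G media type,
                                        # <full-duplex> and status: active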

SL> Does it need to have the network
SL> stack tuned with whatever its equivalent of maxsockbuf and
SL> send/recvbuf are?

On the FreeBSD side we set

kern.ipc.maxsockbuf=33554432
net.inet.tcp.sendbuf_max=33554432
net.inet.tcp.recvbuf_max=33554432
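In case it matters, those can also be set and verified at runtime with
sysctl(8):

sysctl kern.ipc.maxsockbuf=33554432
sysctl net.inet.tcp.sendbuf_max=33554432
sysctl net.inet.tcp.recvbuf_max=33554432
sysctl -n kern.ipc.maxsockbuf          # confirm the value actually took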

I don't know what the equivalents on Solaris would be; I am still doing
research on that.
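From what I have found so far (not verified on our Solaris box yet), the
counterparts there seem to be the tcp_max_buf / tcp_xmit_hiwat /
tcp_recv_hiwat tunables, e.g.

ndd -set /dev/tcp tcp_max_buf 33554432   # max socket buffer size
ndd /dev/tcp tcp_xmit_hiwat              # default send buffer, check first
ndd /dev/tcp tcp_recv_hiwat              # default receive buffer

but I would not take my word for that yet.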

SL> Do the switch ports and NIC counters show any drops
SL> or errors?

No, nothing bad to be seen there.
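On the FreeBSD side that was (assuming ix0 again; the exact sysctl names
vary a bit between driver versions):

netstat -i -I ix0                        # Ierrs/Idrop/Oerrs/Coll all at 0
sysctl dev.ix.0 | grep -iE 'err|drop'    # the driver's own error/drop counters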

SL> On the FBSD servers you could also run 'netstat -i -w 1'
SL> under load to see if drops are occurring locally, or 'systat -vmstat'
SL> for resource contention problems. But again, a similar setup here and
SL> no such issues have appeared.

No errors, no collisions, no drops.
I cannot spot any bottlenecks in netstat, either. One thing I do wonder
about is that all IRQs (about 700 under load) are routed to only one queue
on the ix interface (there seems to be one queue per core by default).
Should the load be spread across the queues, or is that expected behaviour?
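For reference, this is how I looked at the distribution (assuming ix0; the
per-queue sysctl names may differ between driver versions):

vmstat -i | grep ix0            # one interrupt line per ix0 queue
sysctl dev.ix.0 | grep -i que   # per-queue packet/irq counters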


cu
  Gerrit

