NFS performance tuning?

Bruce Evans brde at optusnet.com.au
Fri Dec 5 08:50:18 PST 2008


On Fri, 5 Dec 2008, Xin LI wrote:

> Lev Serebryakov wrote:
>> Hello, Freebsd-net.
>>
>>   I have two systems (7-Stable), connected with gigabit link. iperf
>> shows 667 Mbits/sec on TCP and 600Mbit/s on UDP without any tuning.
>>
>>   But NFS gives me only 17Mb/s (~136 Mbit/s) on sequential read of
                              MB
>> very big files, and about 8-10Mb/s on "real" workloads.
>>
>>   Are there any guides on how to tune NFS for performance?
>
> rsize/wsize?  I think the current default (8192) is too small; perhaps
> 262144 would be a better choice.

The NFS_RSIZE/NFS_WSIZE defaults (8192/8192) are only used for udp.
For tcp, the default is NFS_MAXDATA (32768), which is large enough.

> What I usually use is:
>
> mount_nfs -3Tr 262144 -w 262144

262144 is not supported.  Size parameters larger than MAXBSIZE (65536)
are silently reduced to MAXBSIZE.
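A quick sketch of what that clamping means in practice (illustrative
only, not the kernel code; MAXBSIZE is 65536 here):

```shell
# Sketch: a requested size above MAXBSIZE is silently reduced to it.
req=262144
max=65536                 # MAXBSIZE
if [ "$req" -gt "$max" ]; then
    eff=$max
else
    eff=$req
fi
# So "mount_nfs -3Tr 262144 -w 262144" effectively mounts with:
echo "mount_nfs -3T -r $eff -w $eff"
```

i.e. the 262144 request buys nothing over asking for 65536 directly.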

I use the defaults with tcp and, IIRC, 16K/16K with udp.  IIRC 32K/32K
doesn't work so well with udp, and there is a bug in how the defaults
are set when -T is toggled (apparently they are only set once, so if
you start with udp and switch to tcp you keep the udp defaults, and
vice versa).  I use several optimizations for writing in the nfs server
and several bug fixes for reading and writing in vfs clustering.
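Given that default-toggling bug, a defensive workaround is to pass the
sizes explicitly instead of relying on the defaults.  A sketch, using
the sizes discussed above (server:/export and /mnt are placeholders):

```shell
# tcp mount with the (already adequate) 32K defaults made explicit:
mount_nfs -3T -r 32768 -w 32768 server:/export /mnt

# udp mount with the 16K/16K sizes mentioned above:
mount_nfs -3 -r 16384 -w 16384 server:/export /mnt
```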

Nfs latency is more of a problem than nfs throughput for some applications.
E.g., compiling can take several times longer on nfs mainly because
most of the time is spent waiting to reopen many include files many
times each (the data normally remains cached after the first use, but
each open() requires RPCs).  I tune networks for latency and use some
hacks to reduce the number of nfs RPCs, and compile with -j4 even on
UP systems to reduce the effects of nonzero latency.
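To put rough numbers on that (illustrative figures, not measurements):
even a modest per-RPC round trip adds up quickly across the open()
calls a large build makes.

```shell
# Back-of-envelope: 20000 open() RPCs at an assumed 0.5 ms round trip.
opens=20000
rtt_us=500                              # assumed per-RPC round trip
total_ms=$(( opens * rtt_us / 1000 ))
echo "${total_ms} ms spent waiting on open() RPCs"
```

Ten seconds of pure wait time, which -j4 can at least overlap with
useful work.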

Bruce


More information about the freebsd-net mailing list