NFS Performance issue against NetApp

Marc G. Fournier scrappy at hub.org
Thu Apr 25 18:44:02 UTC 2013


On 2013-04-24, at 17:36 , Rick Macklem <rmacklem at uoguelph.ca> wrote:

> If you explicitly set rsize=N,wsize=N on a mount, those sizes will be
> used unless they are greater than min(MAXBSIZE, server-max). MAXBSIZE is
> the limit for the client side buffer cache and server-max is whatever
> the server says is its max, so the client never uses a value greater than
> that.
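
Based on Rick's description, one way to pin the transfer sizes explicitly is on the mount command line; the client will still clamp whatever you ask for to min(MAXBSIZE, server-max). A sketch, assuming the same server/path as below and that you have root (this is an admin command, not something to script blindly):

```shell
# Mount with explicit 32k read/write sizes (values are capped by the
# client at min(MAXBSIZE, server-max), so asking for more is harmless)
mount -t nfs -o rw,intr,soft,rsize=32768,wsize=32768 192.168.1.1:/vol/vm /vm

# Confirm what the client actually negotiated
nfsstat -m
```

If nfsstat -m reports smaller values than you requested, that's the client or server cap kicking in, not a mount failure.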

Just got my new Intel card in, so I'm starting to play with it … one thing I didn't notice yesterday when I ran nfsstat -m:

nfsv3,tcp,resvport,soft,intr,cto,lockd,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=65536,wsize=65536,readdirsize=32768,readahead=1,wcommitsize=5175966,timeout=120,retrans=2

Earlier in this thread, it was recommended to change to 32k … and Jeremy Chadwick thought it defaulted to 8k …

My fstab entry right now is simply:

192.168.1.1:/vol/vm     /vm             nfs     rw,intr,soft    0       0

so I'm not changing rsize/wsize anywhere … did those defaults get raised recently without anyone noticing?  Or does it make sense to reduce from 64k -> 32k to get better performance?
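
For what it's worth, if I wanted to test the 32k suggestion persistently, I assume the fstab entry would just grow the two options (this is a config fragment, untested on my end):

```shell
# /etc/fstab — same mount as above, but with explicit 32k transfer sizes
192.168.1.1:/vol/vm     /vm             nfs     rw,intr,soft,rsize=32768,wsize=32768    0       0
```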

Again, this is using a FreeBSD client to mount from a NetApp file server ...
