NFS Performance issue against NetApp

Marc Fournier mfournier at sd63.bc.ca
Thu Apr 25 18:46:19 UTC 2013


On 2013-04-25, at 11:43 , Marc G. Fournier <scrappy at hub.org> wrote:

> 
> On 2013-04-24, at 17:36 , Rick Macklem <rmacklem at uoguelph.ca> wrote:
> 
>> If you explicitly set rsize=N,wsize=N on a mount, those sizes will be
>> used unless they are greater than min(MAXBSIZE, server-max). MAXBSIZE is
>> the limit for the client side buffer cache and server-max is whatever
>> the server says is its max, so the client never uses a value greater than
>> that.
> 
> Just got my new Intel card in, so starting to play with it … one thing I didn't notice yesterday when I ran nfsstat -m:
> 
> nfsv3,tcp,resvport,soft,intr,cto,lockd,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=65536,wsize=65536,readdirsize=32768,readahead=1,wcommitsize=5175966,timeout=120,retrans=2
> 
> Earlier in this thread, it was recommended to change rsize/wsize to 32k … and Jeremy Chadwick thought it defaulted to 8k …
> 
> My fstab entry right now is simply:
> 
> 192.168.1.1:/vol/vm     /vm             nfs     rw,intr,soft    0       0
> 
> so I'm not changing rsize/wsize anywhere … did those defaults get raised recently and nobody noticed? Or does it make sense to reduce from 64k to 32k to get better performance?
> 
> Again, this is using a FreeBSD client to mount from a NetApp file server …
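
(If I do end up forcing 32k to test Rick's point about min(MAXBSIZE, server-max), I'm assuming the fstab entry would just grow the rsize/wsize options, i.e. something like the following -- untested, purely to show where the knobs go:

192.168.1.1:/vol/vm     /vm             nfs     rw,intr,soft,rsize=32768,wsize=32768    0       0

and then a fresh nfsstat -m should confirm what the client actually negotiated with the filer.)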


Just checked on the Linux box (old, being phased out), and when I do an NFS mount there, it too is set to 64k rsize/wsize:

Flags:	rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.5.253.130,mountvers=3,mountproto=tcp
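
For what it's worth, if I wanted to pin the Linux side to 32k as well for an apples-to-apples comparison, I believe it's the same rsize/wsize options there, e.g. something along the lines of (untested, and the export path is just the NetApp volume from my fstab above, used here for illustration):

mount -t nfs -o rw,vers=3,rsize=32768,wsize=32768 192.168.1.1:/vol/vm /vm

and /proc/mounts (or nfsstat -m) on the Linux box would show whatever was actually negotiated.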




