NFS Performance issue against NetApp

Marc G. Fournier scrappy at hub.org
Thu Apr 25 20:40:11 UTC 2013


On 2013-04-24, at 17:36 , Rick Macklem <rmacklem at uoguelph.ca> wrote:

>> 
> For the new client, it defaults to the min(MAXBSIZE, server-max), where
> server-max is whatever the server says is its maximum (also MAXBSIZE for
> the new server). I think the old server uses 32768.
> These numbers are for the default tcp mounts. Specify udp (or mntudp) and
> I think the default becomes 16384.
> 
> If you explicitly set rsize=N,wsize=N on a mount, those sizes will be
> used unless they are greater than min(MAXBSIZE, server-max). MAXBSIZE is
> the limit for the client side buffer cache and server-max is whatever
> the server says is its max, so the client never uses a value greater than
> that.
> 
> For readahead, the default is 1. This seems rather small to me and I think is
> in the "from the old days" category. You can set it to
> a larger value, although there is an ifdef'd upper limit, which is what
> you'll get if you specify a really large value for readahead. Admittedly,
> if you are using a large rsize,wsize on a low latency LAN, readahead=1
> may be sufficient.
> 
> As someone else noted, if you are using head or stable/9, "nfsstat -m"
> shows you what is actually being used, for the new client only.

'k, with the Intel card in (igb driver), I've tried various different mount options … 64k (the default), 32k, 1/2/3 for readahead … all tend to come out around the same ~250s to run … I'm getting drastic now, and am reformatting one of the other servers (exact same config, but without the Intel card for the first try) with CentOS 6.4 … test out its NFS client … see if I get similar results …
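
For the record, each pass is basically just re-mounting with the option under test and timing the same read over the mount … something like this, where the export path and test file are placeholders rather than the real layout:

    # remount with the settings under test, then time an identical read each pass
    umount /mnt/netapp
    mount -t nfs -o tcp,rsize=32768,wsize=32768,readahead=2 netapp:/vol/vol0 /mnt/netapp
    time dd if=/mnt/netapp/testfile of=/dev/null bs=1m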
