NFS DRC size

Garrett Wollman wollman at freebsd.org
Sat Mar 9 01:52:46 UTC 2013


<<On Fri, 8 Mar 2013 19:47:13 -0500 (EST), Rick Macklem <rmacklem at uoguelph.ca> said:

> The cached replies are copies of the mbuf list done via m_copym().
> As such, the clusters in these replies won't be free'd (ref cnt -> 0)
> until the cache is trimmed (nfsrv_trimcache() gets called after the
> TCP layer has received an ACK for receipt of the reply from the client).

I wonder whether this trimming is working at all.  In my experience,
the size of the DRC quickly grows under load to the maximum (or
actually, slightly beyond) and never drops much below that level.  On
my production server right now, "nfsstat -se" reports:

Server Info:
  Getattr   Setattr    Lookup  Readlink      Read     Write    Create    Remove
 13036780    359901   1723623      3420  36397693  12385668    346590    109984
   Rename      Link   Symlink     Mkdir     Rmdir   Readdir  RdirPlus    Access
    45173        16    116791     14192      1176        24  12876747   3398533
    Mknod    Fsstat    Fsinfo  PathConf    Commit   LookupP   SetClId SetClIdCf
        0      2703     14992      7502   1329196         0         1         1
     Open  OpenAttr OpenDwnGr  OpenCfrm DelePurge   DeleRet     GetFH      Lock
   263034         0         0    263019         0         0    545104         0
    LockT     LockU     Close    Verify   NVerify     PutFH  PutPubFH PutRootFH
        0         0    263012         0         0  23753375         0         1
    Renew RestoreFH    SaveFH   Secinfo RelLckOwn  V4Create
        2    263006    263033         0         0         0
Server:
Retfailed    Faults   Clients
        0         0         1
OpenOwner     Opens LockOwner     Locks    Delegs 
       56        10         0         0         0 
Server Cache Stats:
   Inprog      Idem  Non-idem    Misses CacheSize   TCPPeak
        0         0         0  81714128     60997     61017

The server has only been up for about 24 hours.  Should I be setting
the size limit (vfs.nfsd.tcphighwater) to something truly outrageous,
like 200,000?  (I'd definitely need to deal with the mbuf cluster
issue then!)  The average request rate over that period is about
1000/s, but that includes several episodes of high-CPU spinning (which
I resolved by increasing the DRC limit).
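
(If I do bump it, the knob itself is trivial: the sysctl below,
settable on the fly or in /etc/sysctl.conf.  The 200,000 is only the
number I'm musing about above, not a recommendation.)

# raise the TCP DRC ceiling at runtime
sysctl vfs.nfsd.tcphighwater=200000

# and in /etc/sysctl.conf to make it persistent
vfs.nfsd.tcphighwater=200000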

Meanwhile, some relevant bits from sysctl:

vfs.nfsd.udphighwater: 500
vfs.nfsd.tcphighwater: 61000
vfs.nfsd.minthreads: 16
vfs.nfsd.maxthreads: 64
vfs.nfsd.threads: 64
vfs.nfsd.request_space_used: 1416
vfs.nfsd.request_space_used_highest: 4284672
vfs.nfsd.request_space_high: 47185920
vfs.nfsd.request_space_low: 31457280
vfs.nfsd.request_space_throttled: 0
vfs.nfsd.request_space_throttle_count: 0

(I'd actually like to put maxthreads back up to 256, which is where I
had it during testing, but I need to verify that the jumbo-frames
issue is fixed first; I did my pre-production testing on a non-jumbo
network.)
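
(When I do, it should just be the sysctl below, assuming
vfs.nfsd.maxthreads is actually writable at runtime on this kernel;
otherwise it means restarting nfsd with different flags.)

# restore the thread ceiling I used during testing
sysctl vfs.nfsd.maxthreads=256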

-GAWollman


