misc/145189: nfsd performs abysmally under load

Garrett Cooper yanefbsd at gmail.com
Tue Mar 30 20:50:08 UTC 2010

The following reply was made to PR misc/145189; it has been noted by GNATS.

From: Garrett Cooper <yanefbsd at gmail.com>
To: Bruce Evans <brde at optusnet.com.au>
Cc: Rich <rercola at acm.jhu.edu>, freebsd-bugs at freebsd.org, 
	freebsd-gnats-submit at freebsd.org
Subject: Re: misc/145189: nfsd performs abysmally under load
Date: Tue, 30 Mar 2010 13:44:05 -0700

 On Tue, Mar 30, 2010 at 1:11 PM, Bruce Evans <brde at optusnet.com.au> wrote:
 > On Tue, 30 Mar 2010, Rich wrote:
 >> On Tue, Mar 30, 2010 at 11:50 AM, Bruce Evans <brde at optusnet.com.au>
 >> wrote:
 >>>> For instance, copying a 4GB file over NFSv3 from a ZFS filesystem with
 >>>> the following flags
 >>>> [rw,nosuid,hard,intr,nofsc,tcp,vers=3,rsize=8192,wsize=8192,slop...]
 >>>> (on the client; the above is the server), I achieve 2 MB/s,
 >>>> fluctuating between 1 and 3. (pv reports 2.23 MB/s avg)
 > I also tried various nfs r/w sizes and tcp/udp.  The best sizes are
 > probably the fs block size or twice that (normally 16K for ffs).  Old
 > versions of FreeBSD had even more bugs in this area and gave surprising
 > performance differences depending on the nfs r/w sizes or application
 > i/o sizes.  In some cases smaller sizes worked best, apparently because
 > they avoided the stalls.
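 A sketch of trying the block-sized transfers suggested above (the server
 name, export path, and mount point are placeholders, and option spellings
 vary between clients; check mount_nfs(8) or nfs(5) for your system):

 ```shell
 # Hypothetical remount with r/w sizes matching the default 16K ffs block
 # size instead of the 8K used in the report; server:/export and /mnt/nfs
 # are placeholder names.
 mount -t nfs -o rw,hard,intr,tcp,vers=3,rsize=16384,wsize=16384 \
     server:/export /mnt/nfs
 ```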
 >>>> ...
 >>> Enabling polling is a good way to destroy latency.  A ping latency of
 >>> ...
 >> Actually, we noticed that throughput appeared to get marginally better
 >> while
 >> causing occasional bursts of crushing latency, but yes, we have it on in
 >> the
 >> kernel without using it in any actual NICs at present. :)
 >> But yes, I'm getting 40-90+ MB/s, occasionally slowing to 20-30 MB/s,
 >> average after copying a 6.5 GB file of 52.7 MB/s, on localhost IPv4,
 >> with no additional mount flags. {r,w}size=8192 on localhost goes up to
 >> 80-100 MB/s, with occasional sinks to 60 (average after copying
 >> another, separate 6.5 GB file: 77.3 MB/s).
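 The pv figures above can be cross-checked with a crude dd probe; TESTDIR
 is a placeholder to point at the NFS mount under test (files larger than
 client RAM give more honest numbers than this small sketch):

 ```shell
 # Crude sequential-write throughput probe.  TESTDIR defaults to /tmp so
 # the sketch runs anywhere; set it to the NFS mount point being measured.
 TESTDIR=${TESTDIR:-/tmp}
 # Write 8 MB in 8192-byte blocks, matching the rsize/wsize under
 # discussion; dd prints elapsed time and bytes/sec when it finishes.
 dd if=/dev/zero of="$TESTDIR/ddtest.bin" bs=8192 count=1024
 rm -f "$TESTDIR/ddtest.bin"
 ```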
 > I thought you said you often got 1-3MB/S.
 >> Also:
 >> 64 bytes from icmp_seq=0 ttl=64 time=0.015 ms
 >> 64 bytes from icmp_seq=1 ttl=64 time=0.049 ms
 >> 64 bytes from icmp_seq=2 ttl=64 time=0.012 ms
 > Fairly normal slowness for -current.
 >> 64 bytes from [actual IP]: icmp_seq=0 ttl=64 time=0.019 ms
 >> 64 bytes from [actual IP]: icmp_seq=1 ttl=64 time=0.015 ms
 > Are these with external hardware NICs?  Then 15 uS is excellent.  Better
 > than I've ever seen.  Very good hardware might be able to do this, but
 > I suspect it is for the local machine.  BTW, I don't like the times
 > being reported in ms and sub-uS times not being supported.  I sometimes
 > run Linux or cygwin ping and don't like it not supporting sub-mS times,
 > so that it always reports 0 for my average latency of 100 uS.
 >>> After various tuning and bug fixing (now partly committed by others) I
 >>> get improvements like the following on low-end systems with ffs (I
 >>> don't use zfs):
 >>> - very low end with 100Mbps ethernet: little change; bulk transfers
 >>>   always went at near wire speed (about 10 MB/S)
 >>> - low end with 1Gbps: bulk transfers up from 20MB/S to 45MB/S (local
 >>>   ffs 50MB/S).  buildworld over nfs of 5.2 world down from 1200 seconds
 >>>   to 800 seconds (this one is very latency-sensitive.  Takes about 750
 >>>   seconds on local ffs).
 >> Is this on 9.0-CURRENT, or RELENG_8, or something else?
 > Mostly with 7-CURRENT or 8-CURRENT a couple of years ago.  Sometimes with
 > a ~5.2-SERVER.  nfs didn't vary much with the server, except there were
 > surprising differences due to latency that I never tracked down.
 > I forgot to mention another thing you can try easily:
 > - negative name caching.  Improves latency.  I used this to reduce make
 >   times significantly, and it is now standard in -current but not
 >   enabled by default.
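 On NFS clients that expose it, negative name caching can be turned on at
 mount time; newer FreeBSD releases document a negnametimeo option in
 mount_nfs(8).  The option name and value here are from those later
 releases, so treat this as a sketch rather than something guaranteed to
 work on the versions discussed in this thread:

 ```shell
 # Hypothetical remount caching negative name lookups for 60 seconds;
 # server:/export and /mnt/nfs are placeholder names.
 mount -t nfs -o tcp,vers=3,negnametimeo=60 server:/export /mnt/nfs
 ```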
     Have you also tried tuning via sysctl (vfs.nfs*)?
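 For reference, the available knobs can be enumerated on the running
 kernel; the exact names vary by FreeBSD release, so this only shows how
 to see what exists rather than recommending particular values:

 ```shell
 # List NFS-related sysctl knobs on FreeBSD; -d prints each knob's
 # description instead of its value.
 sysctl -d vfs.nfs
 sysctl -d vfs.nfsd 2>/dev/null  # server-side knobs, if present
 sysctl vfs.nfs                  # current values
 ```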