Terrible NFS performance under 9.2-RELEASE?

Rick Macklem rmacklem at uoguelph.ca
Fri Jan 31 03:37:24 UTC 2014


J David wrote:
> On Wed, Jan 29, 2014 at 10:31 PM, Rick Macklem <rmacklem at uoguelph.ca>
> wrote:
> >> I've been busy the last few days, and won't be able to get to any
> >> code
> >> until the weekend.
> 
> Is there likely to be more to it than just cranking the MAX_TX_SEGS
> value and recompiling?  If so, is it something I could take on?
> 
> > Well, NFS hands TCP a list of 34 mbufs. If TCP only adds one, then
> > increasing it from 34 to 35 would be all it takes. However, see
> > below.
> 
> One thing I don't want to miss here is that an NFS block size of
> 65,536 is really suboptimal.  The largest size of a TCP datagram is
> 65,535 bytes.  So by the time NFS adds its overhead and the total
> amount of data to be sent winds up in that ~65k range, the operation
> is guaranteed to be split into at least two TCP packets, one
> max-size and one tiny one.  This doubles a lot of the network stack
> overhead, regardless of whether the packet ends up being segmented
> into tiny bits further down the road or not.
> 
> If NFS could be modified to respect the actual size of a TCP packet,
> generating a steady stream of 63.9k (or thereabouts) writes instead
> of the current 64k-1k-64k-1k pattern, performance would likely see
> another significant boost.  This would nearly double the average
> payload per packet, which would help with both network latency and
> CPU load.
> 
> It's also not 100% clear, but it seems like in some cases the
> existing behavior causes the TCP stack to park on the "leftover"
> bit and wait for more data, which arrives in another >64k chunk, and
> from there on out there's no more correlation between TCP packets
> and NFS operations, so an operation doesn't begin on a packet
> boundary.  That continues as long as the load keeps up.  That's
> probably not good for performance either, and it certainly confuses
> the heck out of tcpdump.
> 
> Probably 60k would be the next most reasonable size, since it's the
> largest page size multiple that will fit into a TCP packet while
> still
> leaving room for overhead.
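
To make the page-multiple arithmetic above concrete, here is a quick
sketch (the exact RPC/NFS header sizes vary, so treat the slack figure
as approximate):

#include <stdio.h>

int
main(void)
{
	const int tcp_max = 65535;	/* largest TCP/IP datagram, as above */
	const int page = 4096;

	/* 16 pages (64k) already exceeds the limit before any headers. */
	printf("16 pages = %d\n", 16 * page);
	/* 15 pages (60k) leaves ~4k of slack for RPC/NFS/TCP/IP overhead. */
	printf("15 pages = %d, slack = %d\n", 15 * page, tcp_max - 15 * page);
	return (0);
}
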
> 
> Since the max size of TCP packets is not an area where there's really
> any flexibility, what would have to happen to NFS to make that (or
> arbitrary values) perform at its best within that constraint?
> 
> It's apparent from even trivial testing that performance is
> dramatically affected if the "use a power of two for NFS rsize/wsize"
> recommendation isn't followed, but what is the origin of that?  Is it
> something that could be changed?
> 
> > I don't think that m_collapse() is more likely to fail, since it
> > only copies data to the previous mbuf when the entire mbuf that
> > follows will fit and it's allowed. (I'd assume that a ref count
> > copied mbuf cluster doesn't allow this copy, or things would be
> > badly broken.)
> 
> m_collapse() checks M_WRITABLE(), which appears to cover the ref
> count case.  (It's a dense macro, but it seems to require a ref
> count of 1 if a cluster is used.)
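
Written out, that reading of the test looks roughly like this; it's a
paraphrase from memory of the 9.x macro, not the literal text of
sys/mbuf.h, and the ref_cnt field name is an assumption about the 9.x
struct m_ext layout:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>

/*
 * Sketch of the writability check described above: an mbuf is treated
 * as writable only if it isn't marked read-only and, when it carries
 * an external cluster, that cluster isn't shared (ref count of 1).
 */
static int
mbuf_looks_writable(struct mbuf *m)
{

	if (m->m_flags & M_RDONLY)
		return (0);
	if ((m->m_flags & M_EXT) && *(m->m_ext.ref_cnt) != 1)
		return (0);
	return (1);
}
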
> 
> The cases where m_collapse() can succeed are pretty slim.  It pretty
> much requires two consecutive underutilized buffers, which probably
> explains why it fails so often in this code path.  Since one of its
> two methods outright skips the packet header mbuf (to avoid the risk
> of moving it), possibly the only case where it succeeds is when the
> last data mbuf is short enough that whatever NFS trailers are being
> appended can fit with it.
> 
Btw, in the previous post I agreed "in general". For this specific
case of the 64K NFS read reply/write request, the first two mbufs
don't have much data in them. The first is the Sun RPC header generated
by the krpc and the 2nd is the first part of the NFS args that precedes
the data. As such, I suspect that m_collapse() will often succeed in
copying the 2nd mbuf's data into the first and reducing the mbuf count
to 33. (You could find out by adding a counter for calls to m_collapse()
and testing 64K without my patch.)
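
Something along these lines would do. This is an untested sketch; the
sysctl name and the wrapper are made up, and the increment really
belongs right at the existing m_collapse() call in the driver's
transmit path:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>
#include <sys/mbuf.h>

/* Readable via "sysctl debug.tso_collapse_calls". */
static unsigned long tso_collapse_calls;
SYSCTL_ULONG(_debug, OID_AUTO, tso_collapse_calls, CTLFLAG_RD,
    &tso_collapse_calls, 0, "m_collapse() calls in the TSO transmit path");

/*
 * Hypothetical wrapper: count the call, then do what the driver
 * already does.  max_segs would be the driver's scatter/gather limit.
 */
static struct mbuf *
counted_collapse(struct mbuf *m, int max_segs)
{

	tso_collapse_calls++;
	return (m_collapse(m, M_DONTWAIT, max_segs));
}
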

rick

> > Bottom line, I think calling either m_collapse() or m_defrag()
> > should be considered a "last resort".
> 
> It definitely seems more designed for a case where 8 different stack
> layers each put their own little header/trailer fingerprint on the
> packet, and that's not what's happening here.
> 
> > Maybe the driver could reduce the size of if_hw_tsomax whenever
> > it finds it needs to call one of these functions, to try and avoid
> > a re-occurrence?
> 
> Since the issue is one of segment count rather than packet length,
> this seems risky.  If one of those touched-by-everybody packets goes
> by, it may not be that large, but it would risk permanently (until
> reboot) dropping the throughput of that interface.
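
For concreteness, the sort of adjustment being discussed might look
roughly like this; it's a sketch only, assuming a tree that has the
if_hw_tsomax field, with an arbitrary 4k step and floor, and it shrinks
the byte limit rather than the segment count, which is exactly the
mismatch noted above:

#include <sys/param.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_var.h>
#include <netinet/in.h>
#include <netinet/ip.h>

/*
 * Sketch: after the transmit path has had to fall back to m_defrag()
 * or m_collapse(), shrink the advertised TSO byte limit a little so
 * the stack hands the driver shorter chains next time.  The 4k step
 * and the IP_MAXPACKET/2 floor are illustration values only.
 */
static void
shrink_tsomax(struct ifnet *ifp)
{

	if (ifp->if_hw_tsomax > IP_MAXPACKET / 2)
		ifp->if_hw_tsomax -= 4096;
}
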
> 
> Thanks!
> 

