Limits on jumbo mbuf cluster allocation

Rick Macklem rmacklem at uoguelph.ca
Tue Mar 19 15:15:15 UTC 2013


I wrote:
> Garrett Wollman wrote:
> > <<On Tue, 12 Mar 2013 23:48:00 -0400 (EDT), Rick Macklem
> > <rmacklem at uoguelph.ca> said:
> >
> > > I've attached a patch that has assorted changes.
> >
> > So I've done some preliminary testing on a slightly modified form of
> > this patch, and it appears to have no major issues. However, I'm still
> > waiting for my user with 500 VMs to have enough free to be able to run
> > some real stress tests for me.
> >
> > I was able to get about 2.5 Gbit/s throughput for a single streaming
> > client over local 10G interfaces with jumbo frames (through a single
> > switch and with LACP on both sides -- how well does lagg(4) interact
> > with TSO and checksum offload?). This is a little disappointing
> > (considering that the filesystem can do 14 Gbit/s locally) but still
> > pretty decent for a single-threaded client. This obviously does not
> > implicate the DRC changes at all, but does suggest that there is room
> > for more performance improvement. (In previous tests last year, I was
> > able to get a sustained 8 Gbit/s when using multiple clients.) I also
> > found that one of our 10G switches is reordering TCP segments in a
> > way that causes poor performance.
> >
> If the server for this test isn't doing anything else yet, you could
> try a test run with a single nfsd thread and see if that improves
> performance.
> 
> ken@ emailed yesterday mentioning that out-of-order reads were
> resulting in poor performance with ZFS, and that a single nfsd thread
> improved that for his test.
> 
> Although a single nfsd thread isn't practical, this suggests that the
> nfsd thread affinity code, which I had forgotten about and which has
> never been ported to the new server, might be needed for this. (I'm
> not sure how to do the affinity stuff for NFSv4, but it should at
> least be easy to port the code so that it works for NFSv3 mounts.)
> 
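For anyone wanting to repeat the single-thread experiment, the server
thread count can be set from rc.conf. This is only a sketch, assuming
the stock nfsd(8) flags; the values shown are illustrative:

```shell
# /etc/rc.conf fragment on the FreeBSD NFS server.
# -n sets the number of nfsd server threads; use 1 for the
# out-of-order read experiment, then restore a larger value
# (the historical default is 4) for production use.
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 1"

# Apply the change:
# service nfsd restart
```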
Oh, and don't hesitate to play with the rsize and readahead options on
the client mount. It is not obvious what the optimal setting is for a
given LAN/server config. (I think the Linux client has a readahead
option?)

rick

> rick
> ps: For a couple of years I had assumed that Isilon would be doing
> this, but they are no longer working on the FreeBSD NFS server, so
> the affinity stuff slipped through the cracks.
> 
> > I'll hopefully have some proper testing results later in the week.
> >
> > -GAWollman
> > _______________________________________________
> > freebsd-net at freebsd.org mailing list
> > http://lists.freebsd.org/mailman/listinfo/freebsd-net
> > To unsubscribe, send any mail to
> > "freebsd-net-unsubscribe at freebsd.org"

