em0 performance subpar

Adam Stylinski kungfujesus06 at gmail.com
Thu Apr 28 22:47:54 UTC 2011


On Thu, Apr 28, 2011 at 02:22:29PM -0700, Jack Vogel wrote:
> My validation engineer set things up on an 8.2-REL system, testing the
> equivalent of HEAD, and he reports performance is fine. This is without
> any tweaks from what's checked in.
> 
> Increasing the descriptors to 4K is way overkill and might actually cause
> problems; go back to the default.
> 
> He has a Linux test client, what are you transmitting to?
> 
> Jack
> 
> 
> On Thu, Apr 28, 2011 at 11:00 AM, Adam Stylinski <kungfujesus06 at gmail.com>wrote:
> 
> > On Thu, Apr 28, 2011 at 09:52:14AM -0700, Jack Vogel wrote:
> > > Adam,
> > >
> > > The TX ring for the legacy driver is small right now compared to em;
> > > try this experiment: edit if_lem.c, search for "lem_txd", and change
> > > EM_DEFAULT_TXD to 1024, see what that does, then 2048.
> > >
> > > My real strategy with the legacy code was that it should be stable,
> > > meaning not getting a lot of changes... that really hasn't worked out
> > > over time. I suppose I'll have to try and give it some tweaks and let
> > > you try it. The problem with this code is that it technically supports
> > > a huge range of old hardware we no longer test, so things I do might
> > > cause other regressions :(
> > >
> > > Oh well, let me know if increasing the TX descriptors helps.
> > >
> > > Jack
> > Jack,
> >
> > Is this the same thing as adjusting these values?:
> >
> > hw.em.rxd=4096
> > hw.em.txd=4096
> >
> > If so, I've maxed these out and it's not helping.  I'll give it a shot on
> > my 8-STABLE box, as it has a kernel I can play with.
> >
> > Setting the MTU to 1500 gave lower throughput.
> >
> > --
> > Adam Stylinski
> > PGP Key: http://pohl.ececs.uc.edu/~adam/publickey.pub
> > Blog: http://technicallyliving.blogspot.com
> >
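For the archives: the hw.em.* knobs above are boot-time loader tunables, so I had been setting them in /boot/loader.conf along these lines (a sketch; whether the legacy lem path honors the same hw.em.* names is an assumption on my part):

```shell
# /boot/loader.conf -- descriptor ring overrides I had been testing.
# Reverting to the driver defaults just means removing these lines.
hw.em.rxd=4096
hw.em.txd=4096
```

After a reboot, the effective per-interface values can be inspected under the `dev.em.0` sysctl tree (the exact OID layout varies by driver version).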

I am transmitting to a Linux client (kernel 2.6.38, 9000-byte MTU, PCIe-based card).  My sysctls on the Linux client (apart from the defaults) look like so:

net.ipv4.ip_forward = 0
# Enable source route verification
net.ipv4.conf.default.rp_filter = 1
# Enable reverse path filtering
net.ipv4.conf.all.rp_filter = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 87380 16777216
net.core.wmem_default = 87380
net.core.rmem_default = 87380
net.ipv4.tcp_mem = 98304 131072 196608
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_window_scaling = 1
dev.rtc.max-user-freq = 1024
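For completeness, this is plain sysctl usage on the Linux side, applied roughly like so (nothing exotic; values as listed above):

```shell
# Apply a single value at runtime (root required):
sysctl -w net.core.rmem_max=16777216

# Or keep all the settings in /etc/sysctl.conf and reload them in one go:
sysctl -p /etc/sysctl.conf
```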

The exact troublesome device (as reported by pciconf): 

em0 at pci0:7:5:0: class=0x020000 card=0x13768086 chip=0x107c8086 rev=0x05 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'Gigabit Ethernet Controller (Copper) rev 5 (82541PI)'
    class      = network
    subclass   = ethernet

Apart from bus saturation (which I don't suspect is the problem), I'm not sure what the issue could be.  What should I try next?
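One sanity check on the bus-saturation question: the 82541PI is a conventional PCI part, and assuming a 32-bit/33 MHz slot (my assumption; a 64-bit or 66 MHz slot would raise this), the theoretical bus peak is only marginally above gigabit line rate, and it is shared with every other device on that bus:

```shell
# Theoretical peak of a 32-bit, 33 MHz conventional PCI bus, in Mbit/s.
# Real-world usable throughput sits well below this figure because of
# bus protocol overhead and contention with other devices on the bus.
echo $(( 32 * 33000000 / 1000000 ))   # prints 1056
```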

-- 
Adam Stylinski
PGP Key: http://pohl.ececs.uc.edu/~adam/publickey.pub
Blog: http://technicallyliving.blogspot.com