[High Interrupt Count] Networking Difficulties
pyunyh at gmail.com
Tue Nov 1 17:42:18 UTC 2011
On Tue, Nov 01, 2011 at 12:16:37AM -0500, Paul A. Procacci wrote:
> On Mon, Oct 31, 2011 at 08:57:46PM -0500, Paul A. Procacci wrote:
> > Gents,
> > I'm having quite an awful problem that I need a bit of help with.
> > I have an HPDL360 G3 ( http://h18000.www1.hp.com/products/quickspecs/11504_na/11504_na.HTML ) which acts as a NAT (via PF) for several (600+) class C's amongst 24+ machines sitting behind it.
> > It's running pfSense (FreeBSD 8.1-RELEASE-p4).
> > The important guts are:
> > 2 x 2.8 GHz Cpus
> > 2 BGE interfaces on a PCI-X bus.
> > During peak times this machine can only handle 500-600 Mbps before running out of CPU capacity (roughly 300 Mbps on the LAN and 300 Mbps on the WAN). It's due to the high number of interrupts.
> > I was speaking with a networking engineer here and he mentioned that I should look at "Interrupt Coalescing" to increase throughput.
> > The only information I found online regarding this was a post from 2 years ago here: http://lists.freebsd.org/pipermail/freebsd-net/2009-June/022227.html
> > The tunables mentioned in the above post aren't present in my system, so I imagine this never made it into the bge driver. Assuming this to be the case, I started looking at DEVICE_POLLING as a solution.
> > I did try implementing device polling, but the results were worse than I expected. netisr was using 100% of a single cpu while the other cpu remained mostly idle.
> > Not knowing exactly what netisr is, I reverted the changes.
> > This leads me to this list. Given the scenario above, I'm nearly certain I need to use device polling instead of the standard interrupt driven setup.
> > The two sysctl's that I've come across thus far that I think are what I need are:
> > net.isr.maxthreads
> > kern.hz
> > I would assume setting net.isr.maxthreads to 2 given my dual core machine is advisable, but I'm not 100% sure.
> > What are the caveats in setting this higher? Given the output of `sysctl -d net.isr.maxthreads` I would expect anything higher than the number of cores to be detrimental. Is this correct?
> > kern.hz I'm more unsure of. I understand what the sysctl is, but I'm not sure how to come up with a reasonable number.
> > Generally speaking, and in your experience, would a setting of 2000 achieve close to the theoretical maximum of the cards? Is there an upper limit that I should be worried about?
> > Random questions:
> > - Is device polling really the answer? Am I missing something in the bge driver that I've overlooked?
> > - What tunables directly affect the processing of high volumes of packets?
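[For reference, both tunables mentioned above are set at boot time on FreeBSD. A sketch of the relevant /boot/loader.conf lines follows; the values are placeholders for illustration, not recommendations:]

    # /boot/loader.conf -- values are illustrative only
    kern.hz=2000             # system timer frequency; boot-time tunable
    net.isr.maxthreads=2     # cap netisr worker threads at the core count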
> After some more coffee, and source code reading, I've now learned that having device polling enabled forces netisr to limit the number of threads it creates to 1.
> This kinda defeats the purpose of enabling device polling, which makes me believe that device polling isn't going to be a great solution after all.
> A snippet from dmesg:
> bge0: <Compaq NC7781 Gigabit Server Adapter, ASIC rev. 0x001002> mem 0xf7ef0000-0xf7efffff irq 30 at device 2.0 on pci1
> brgphy0: <BCM5703 10/100/1000baseTX PHY> PHY 1 on miibus0
> bge1: <Compaq NC7781 Gigabit Server Adapter, ASIC rev. 0x001002> mem 0xf7ff0000-0xf7ffffff irq 29 at device 2.0 on pci4
> brgphy1: <BCM5703 10/100/1000baseTX PHY> PHY 1 on miibus1
> Any help/advice is appreciated, and sorry for following up to myself with this information.
In most cases there is *NO* need to use DEVICE_POLLING on advanced
controllers like bge(4).
How many interrupts do you see on your box?
Are you seeing more than 50K interrupts per second?
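One way to measure this is to sample the per-device interrupt counter twice and divide by the interval. A small sketch (the awk field number assumes the usual "name total rate" layout of vmstat -i; adjust as needed):

```shell
#!/bin/sh
# Compute an interrupt rate from two counter samples taken
# $3 seconds apart.
rate() {
    first=$1; second=$2; interval=$3
    echo $(( (second - first) / interval ))
}

# On the box itself, the two samples would come from something like:
#   vmstat -i | awk '/bge0/ { print $3 }'
# For example, a counter going from 1000000 to 1600000 over 10 seconds:
rate 1000000 1600000 10    # prints 60000
```

`systat -vmstat 1` also shows live per-second interrupt rates without any arithmetic.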
bge(4) already supports interrupt coalescing but its configuration
is not tunable yet, so you may have to patch the driver to change that.
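[For the archives: the coalescing parameters appear to be hard-coded at attach time and written to the chip's host-coalescing block at init. In rough pseudocode (field names as in sys/dev/bge/if_bgereg.h; the numbers are purely illustrative), a local patch would amount to something like:]

    /* pseudocode sketch, not a tested patch */
    sc->bge_rx_coal_ticks   = 150;  /* wait this long after the first frame... */
    sc->bge_rx_max_coal_bds = 64;   /* ...or until this many RX descriptors
                                       accumulate, before interrupting */

[Raising either value trades interrupt load for added latency.]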
I believe there have been a couple of fixes since 8.1-RELEASE for the
BCM5703 sitting on a PCI-X bus. Do you see the same problem on 8.2-RELEASE?