em network issues

John Polstra jdp at polstra.com
Thu Oct 19 22:24:37 UTC 2006


On 19-Oct-2006 Scott Long wrote:
> The performance measurements that Andre and I did early this year showed
> that the INTR_FAST handler provided a very large benefit.

I'm trying to understand why that's the case.  Is it because an
INTR_FAST interrupt doesn't have to be masked and unmasked in the
APIC?  I can't see any other reason for much of a performance
difference in that driver.  With or without INTR_FAST, you've got
the bulk of the work being done in a background thread -- either the
ithread or the taskqueue thread.  It's not clear to me that running
a task is any cheaper than running an ithread.

A difference might show up if you had two or more em devices sharing
the same IRQ.  Then they'd share one ithread, but would each get their
own taskqueue thread.  But sharing an IRQ among multiple gigabit NICs
would be avoided by anyone who cared about performance, so it's not a
very interesting case.  Besides, when you first committed this
stuff, INTR_FAST interrupts were not shareable.

Another change you made in the same commit (if_em.c revision 1.98)
greatly reduced the number of PCI writes made to the RX ring consumer
pointer register.  That would yield a significant performance
improvement.  Did you see gains from INTR_FAST even without this
independent change?

John


More information about the freebsd-net mailing list