Too many interrupts on ixgbe
Sergey Saley
sergeysaley at gmail.com
Tue May 8 14:10:00 UTC 2012
On 25.10.2011 11:21, Sergey Saley wrote:
> Jack Vogel wrote:
>> On Tue, Oct 25, 2011 at 12:22 AM, Sergey Saley <sergeysaley@> wrote:
>>
>>> Ryan Stone-2 wrote:
>>>> On Mon, Oct 24, 2011 at 3:51 PM, Sergey Saley <sergeysaley@> wrote:
>>>>> MPD5, netgraph, pppoe. Type of traffic: any (customer traffic).
>>>>> Buying this card, I counted on 3-4G of traffic at 3-4K pppoe sessions.
>>>>> It turned out to be 600-700Mbit/s, about 50K pps, at 700-800 pppoe sessions.
>>>> PPPoE is your problem. The Intel cards can't load-balance PPPoE
>>>> traffic, so everything goes to one queue. It may be possible to write
>>>> a netgraph module to load-balance the traffic across your CPUs.
>>>>
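A balancing node of that sort would only need to key on the PPPoE session
ID, which sits at a fixed offset behind the Ethernet header. A minimal
userland sketch of that hash step (pppoe_pick_queue is a hypothetical
helper, not part of any existing netgraph module; offsets assume untagged
frames):

#include <stdint.h>
#include <stddef.h>

#define ETHER_HDR_LEN_   14
#define ETHERTYPE_PPPOE_ 0x8864		/* PPPoE session stage */

/* Returns a queue index in [0, nqueues), or -1 for non-PPPoE frames. */
static int
pppoe_pick_queue(const uint8_t *frame, size_t len, unsigned nqueues)
{
	uint16_t etype, sid;

	/* Need the full 6-byte PPPoE header behind the Ethernet header. */
	if (len < ETHER_HDR_LEN_ + 6 || nqueues == 0)
		return (-1);
	etype = (uint16_t)(frame[12] << 8 | frame[13]);
	if (etype != ETHERTYPE_PPPOE_)
		return (-1);
	/* PPPoE header: ver/type(1), code(1), session id(2), length(2). */
	sid = (uint16_t)(frame[ETHER_HDR_LEN_ + 2] << 8 |
	    frame[ETHER_HDR_LEN_ + 3]);
	return ((int)(sid % nqueues));
}

Each session then maps to a stable queue, so per-flow packet ordering is
kept while the load spreads across CPUs.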
>>> OK, thank you for the explanation.
>>> And what about the large number of interrupts?
>>> As for me, it looks like too many...
>>> irq256: ix0:que 0    240536944  6132
>>> irq257: ix0:que 1     89090444  2271
>>> irq258: ix0:que 2     93222085  2376
>>> irq259: ix0:que 3     89435179  2280
>>> irq260: ix0:link             1     0
>>> irq261: ix1:que 0    269468769  6870
>>> irq262: ix1:que 1       110974     2
>>> irq263: ix1:que 2       434214    11
>>> irq264: ix1:que 3       112281     2
>>> irq265: ix1:link             1     0
>>>
>>>
>> How do you decide it's 'too much'? It may be that with your traffic you
>> end up not being able to use offloads, just thinking. It's not like the
>> hardware just "makes it up"; it interrupts on the last descriptor of a
>> packet which has the RS bit set. With TSO you will get larger chunks of
>> data and thus fewer interrupts, but your traffic probably doesn't
>> qualify for it.
>>
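For what it's worth, a rough sanity check against the numbers quoted above:
ix0's four queues fire about 6132 + 2271 + 2376 + 2280, roughly 13K
interrupts/s, against the reported ~50K pps, i.e. close to 4 packets
coalesced per interrupt, so moderation is clearly doing some work. The
trivial arithmetic, as a snippet:

#include <stdio.h>

int
main(void)
{
	/* Per-queue interrupt rates and packet rate quoted in the thread. */
	double irq_rate = 6132 + 2271 + 2376 + 2280;	/* ix0 irq/s */
	double pps = 50000.0;				/* reported pps */

	printf("%.0f irq/s total, %.1f packets per interrupt\n",
	    irq_rate, pps / irq_rate);
	return (0);
}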
> It's easy. I have several servers with a similar task and load:
> about 30K pps, about 500-600M of traffic, about 600-700 pppoe connections.
> The one difference is that they use em(4) NICs instead of ixgbe.
> Here is a typical vmstat -i:
>
> point06# vmstat -i
> interrupt              total  rate
> irq17: atapci0       6173367     0
> cpu0: timer       3904389748   465
> irq256: em0       3754877950   447
> irq257: em1       2962728160   352
> cpu2: timer       3904389720   465
> cpu1: timer       3904389720   465
> cpu3: timer       3904389721   465
> Total            22341338386  2661
>
> point05# vmstat -i
> interrupt              total  rate
> irq14: ata0               35     0
> irq19: atapci1       8323568     0
> cpu0: timer       3905440143   465
> irq256: em0       3870403571   461
> irq257: em1       1541695487   183
> cpu1: timer       3905439895   465
> cpu3: timer       3905439895   465
> cpu2: timer       3905439895   465
> Total            21042182489  2506
>
> point04# vmstat -i
> interrupt              total  rate
> irq19: atapci0       6047874     0
> cpu0: timer       3901683760   464
> irq256: em0        823774953    98
> irq257: em1       1340659093   159
> cpu1: timer       3901683730   464
> cpu2: timer       3901683730   464
> cpu3: timer       3901683730   464
> Total            17777216870  2117
>
>
BTW, maybe there is a way to separate traffic across several queues by
VLAN tag?
That would be a partial solution...
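Done in software, that would be the same kind of keying as for PPPoE, only
on the 802.1Q VLAN ID. A minimal sketch under the same caveats
(vlan_pick_queue is a hypothetical helper; assumes a standard tagged
Ethernet frame):

#include <stdint.h>
#include <stddef.h>

#define ETHERTYPE_VLAN_	0x8100
#define EVL_VLID_MASK_	0x0fff

/* Returns a queue index in [0, nqueues), or -1 for untagged frames. */
static int
vlan_pick_queue(const uint8_t *frame, size_t len, unsigned nqueues)
{
	uint16_t etype, tci;

	/* dst(6) + src(6) + TPID(2) + TCI(2) + inner type(2) = 18 bytes. */
	if (len < 18 || nqueues == 0)
		return (-1);
	etype = (uint16_t)(frame[12] << 8 | frame[13]);
	if (etype != ETHERTYPE_VLAN_)
		return (-1);
	tci = (uint16_t)(frame[14] << 8 | frame[15]);
	return ((int)((tci & EVL_VLID_MASK_) % nqueues));
}

Of course this only helps if the sessions are spread across many vlans;
with a handful of vlans the skew just moves up one level.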