Reducing number of interrupts from intel pro 1000 et adapter

Barney Cordoba barney_cordoba at yahoo.com
Wed Dec 2 14:33:59 UTC 2009



--- On Mon, 11/23/09, Yuriy A. Korobko <administrator at shtorm.com> wrote:

> From: Yuriy A. Korobko <administrator at shtorm.com>
> Subject: Reducing number of interrupts from intel pro 1000 et adapter
> To: freebsd-net at freebsd.org
> Date: Monday, November 23, 2009, 12:42 PM
> Hi,
> 
> I'd like to know how to control tx interrupts on an Intel PRO/1000 ET
> adapter with the igb driver. I just installed one in a router and
> systat shows 8-9k rx interrupts and 20k tx interrupts from the igb0
> and igb1 adapters. The box is a router running FreeBSD 7.2-RELEASE.
> I've tried the default driver from the kernel source and the latest
> from the Intel site; the effect is the same with automatic interrupt
> moderation enabled or disabled. I have the same box with an Intel
> PRO/1000 PT adapter, which has tx(rx)_int_delay sysctls in the em
> driver; with those I was able to reduce the number of tx/rx interrupts
> to 7-8k per interface and got much more idle CPU at the same pps
> because of fewer context switches.
> 
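For reference, the em(4) tuning mentioned above is done with per-device
sysctls roughly like the following; the device numbers and values here are
only illustrative, and the igb(4) 1.7.4 knobs are a different mechanism
(the AIM switch and latency thresholds visible in the sysctl dump below),
whose units are driver-specific:

   # em(4): rx/tx interrupt delay timers (microsecond-scale; see em(4))
   sysctl dev.em.0.rx_int_delay=100
   sysctl dev.em.0.tx_int_delay=100

   # igb(4) 1.7.4: adaptive interrupt moderation and its thresholds;
   # check if_igb.c for the exact meaning/units of these values
   sysctl dev.igb.0.enable_aim=1
   sysctl dev.igb.0.bulk_latency=8000
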
> Interface load:
> 
> border# netstat -I igb0 -w 1
>             input          (igb0)           output
>    packets  errs      bytes    packets  errs      bytes colls
>      41438     0   37923274      51173     0   24539512     0
>      44827     0   41626876      53408     0   24595412     0
>      43300     0   39736056      53118     0   24574219     0
>      43146     0   40399285      53455     0   24368290     0
>      44827     0   42463307      53921     0   23959752     0
> 
> Here are the sysctls:
> 
> dev.igb.0.%desc: Intel(R) PRO/1000 Network Connection version - 1.7.4
> dev.igb.0.%driver: igb
> dev.igb.0.%location: slot=0 function=0
> dev.igb.0.%pnpinfo: vendor=0x8086 device=0x10c9 subvendor=0x8086 subdevice=0xa03c class=0x020000
> dev.igb.0.%parent: pci1
> dev.igb.0.debug: -1
> dev.igb.0.stats: -1
> dev.igb.0.flow_control: 0
> dev.igb.0.enable_aim: 1
> dev.igb.0.low_latency: 1000
> dev.igb.0.ave_latency: 4000
> dev.igb.0.bulk_latency: 8000
> dev.igb.0.rx_processing_limit: 1000
> 
> dev.igb.1.%desc: Intel(R) PRO/1000 Network Connection version - 1.7.4
> dev.igb.1.%driver: igb
> dev.igb.1.%location: slot=0 function=1
> dev.igb.1.%pnpinfo: vendor=0x8086 device=0x10c9 subvendor=0x8086 subdevice=0xa03c class=0x020000
> dev.igb.1.%parent: pci1
> dev.igb.1.debug: -1
> dev.igb.1.stats: -1
> dev.igb.1.flow_control: 0
> dev.igb.1.enable_aim: 1
> dev.igb.1.low_latency: 1000
> dev.igb.1.ave_latency: 4000
> dev.igb.1.bulk_latency: 8000
> dev.igb.1.rx_processing_limit: 1000
> 
> And the debug output:
> 
>  kernel: igb0: Adapter hardware address = 0xc796ec1c 
>  kernel: igb0: CTRL = 0x40c00241 RCTL = 0x48002 
>  kernel: igb0: Packet buffer = Tx=0k Rx=0k 
>  kernel: igb0: Flow control watermarks high = 63488 low = 61988
>  kernel: igb0: Queue(0) tdh = 3023, tdt = 3025
>  kernel: igb0: TX(0) no descriptors avail event = 0
>  kernel: igb0: TX(0) MSIX IRQ Handled = 3754097484
>  kernel: igb0: TX(0) Packets sent = 4815628967
>  kernel: igb0: Queue(0) rdh = 3658, rdt = 3645
>  kernel: igb0: RX(0) Packets received = 7611879022
>  kernel: igb0: RX(0) Split Packets = 0
>  kernel: igb0: RX(0) Byte count = 7013625984942
>  kernel: igb0: RX(0) MSIX IRQ Handled = 3232986641
>  kernel: igb0: RX(0) LRO Queued= 0
>  kernel: igb0: RX(0) LRO Flushed= 0
>  kernel: igb0: LINK MSIX IRQ Handled = 3
>  kernel: igb0: Mbuf defrag failed = 0
>  kernel: igb0: Std mbuf header failed = 0
>  kernel: igb0: Std mbuf packet failed = 0
>  kernel: igb0: Driver dropped packets = 0
>  kernel: igb0: Driver tx dma failure in xmit = 0
> 
>  kernel: igb1: Adapter hardware address = 0xc796dc1c 
>  kernel: igb1: CTRL = 0x40c00241 RCTL = 0x48002 
>  kernel: igb1: Packet buffer = Tx=0k Rx=0k 
>  kernel: igb1: Flow control watermarks high = 63488 low = 61988
>  kernel: igb1: Queue(0) tdh = 4093, tdt = 4093
>  kernel: igb1: TX(0) no descriptors avail event = 0
>  kernel: igb1: TX(0) MSIX IRQ Handled = 10882048108
>  kernel: igb1: TX(0) Packets sent = 31169311987
>  kernel: igb1: Queue(0) rdh = 2515, rdt = 2513
>  kernel: igb1: RX(0) Packets received = 30747961847
>  kernel: igb1: RX(0) Split Packets = 0
>  kernel: igb1: RX(0) Byte count = 26511993282060
>  kernel: igb1: RX(0) MSIX IRQ Handled = 4834518320
>  kernel: igb1: RX(0) LRO Queued= 0
>  kernel: igb1: RX(0) LRO Flushed= 0
>  kernel: igb1: LINK MSIX IRQ Handled = 5
>  kernel: igb1: Mbuf defrag failed = 0
>  kernel: igb1: Std mbuf header failed = 0
>  kernel: igb1: Std mbuf packet failed = 0
>  kernel: igb1: Driver dropped packets = 0
>  kernel: igb1: Driver tx dma failure in xmit = 0
> 

I'm curious: why are you doing a load test on a single-core system with a
part that is clearly designed to be used on a multicore system?
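
One quick way to see how the driver has actually split its work across
MSI-X vectors (and hence how much it could spread over multiple cores) is
to look at the per-vector interrupt counters; the exact vector names vary
by driver version:

   # per-vector interrupt counts; each igb MSI-X vector shows up separately
   vmstat -i | grep igb

   # dump the per-queue driver counters to the console, as quoted above
   sysctl dev.igb.0.debug=1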

Barney

