kern/135222: [igb] low speed routing between two igb interfaces

Barney Cordoba barney_cordoba at yahoo.com
Wed Jun 17 14:39:13 UTC 2009




--- On Fri, 6/12/09, Michael <freebsdusb at bindone.de> wrote:

> From: Michael <freebsdusb at bindone.de>
> Subject: Re: kern/135222: [igb] low speed routing between two igb interfaces
> To: freebsd-net at FreeBSD.org
> Date: Friday, June 12, 2009, 5:50 AM
> The following reply was made to PR
> kern/135222; it has been noted by GNATS.
> 
> From: Michael <freebsdusb at bindone.de>
> To: Cc: freebsd-gnats-submit at FreeBSD.org
> Subject: Re: kern/135222: [igb] low speed routing between
> two igb interfaces
> Date: Fri, 12 Jun 2009 11:45:47 +0200
> 
>  The original poster reported that the suggested fix works
> for him:
>  ---
>  Hello Michael,
>  
>  Thank you. It's working.
>  
>  I consider it necessary to put this into the release
> errata.
>  
>  
>  Mishustin Andrew wrote:
>  >> Number:         135222
>  >> Category:       kern
>  >> Synopsis:       [igb] low speed routing between two igb interfaces
>  >> Confidential:   no
>  >> Severity:       serious
>  >> Priority:       medium
>  >> Responsible:    freebsd-bugs
>  >> State:          open
>  >> Quarter:
>  >> Keywords:
>  >> Date-Required:
>  >> Class:          sw-bug
>  >> Submitter-Id:   current-users
>  >> Arrival-Date:   Wed Jun 03 18:30:01 UTC 2009
>  >> Closed-Date:
>  >> Last-Modified:
>  >> Originator:     Mishustin Andrew
>  >> Release:        FreeBSD 7.1-RELEASE amd64, FreeBSD 7.2-RELEASE amd64
>  >> Organization:
>  > HNT
>  >> Environment:
>  > FreeBSD test.hnt 7.2-RELEASE FreeBSD 7.2-RELEASE #12: Thu Apr 30 18:28:15 MSD 2009
>  >   admin at test.hnt:/usr/src/sys/amd64/compile/GENERIC  amd64
>  >> Description:
>  > I set up a multiprocessor FreeBSD server to act as a simple gateway.
>  > It uses the onboard Intel 82575EB Dual-Port Gigabit Ethernet Controller.
>  > Forwarded traffic runs at only about 400 Kbit/s.
>  > I tested both interfaces separately - an ftp client reaches close to
>  > 1 Gbit/s in both directions.
>  > When I swap in an older Intel "em" NIC, the gateway forwards at close
>  > to 1 Gbit/s.
>  > 
>  > It looks like a bug in the igb driver that affects forwarded traffic.
>  > 
>  > If I set
>  > hw.igb.enable_aim=0
>  > the speed is about 1 Mbit/s.
>  > 
>  > hw.igb.rxd, hw.igb.txd, and "ifconfig -tso" have no
>  > effect.
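
(For reference: hw.igb.enable_aim, hw.igb.rxd and hw.igb.txd are boot-time
loader tunables, and TSO is toggled per interface with ifconfig. Assuming the
reporter set them the usual way, that would look something like the lines
below; the descriptor counts are placeholders, not the reporter's values:

    # /boot/loader.conf
    hw.igb.enable_aim=0     # disable adaptive interrupt moderation
    hw.igb.rxd=4096         # receive descriptors per ring
    hw.igb.txd=4096         # transmit descriptors per ring

    # at runtime, disable TSO on each interface
    ifconfig igb0 -tso
    ifconfig igb1 -tso
)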
>  > 
>  > Nothing in messages.log
>  > 
>  > netstat -m
>  > 516/1674/2190 mbufs in use (current/cache/total)
>  > 515/927/1442/66560 mbuf clusters in use
> (current/cache/total/max)
>  > 515/893 mbuf+clusters out of packet secondary zone in
> use (current/cache)
>  > 0/44/44/33280 4k (page size) jumbo clusters in use
> (current/cache/total/max)
>  > 0/0/0/16640 9k jumbo clusters in use
> (current/cache/total/max)
>  > 0/0/0/8320 16k jumbo clusters in use
> (current/cache/total/max)
>  > 1159K/2448K/3607K bytes allocated to network
> (current/cache/total)
>  > 0/0/0 requests for mbufs denied
> (mbufs/clusters/mbuf+clusters)
>  > 0/0/0 requests for jumbo clusters denied (4k/9k/16k)
>  > 0/0/0 sfbufs in use (current/peak/max)
>  > 0 requests for sfbufs denied
>  > 0 requests for sfbufs delayed
>  > 0 requests for I/O initiated by sendfile
>  > 0 calls to protocol drain routines
>  > 
>  > I use only IPv4 traffic.
>  > 
>  >> How-To-Repeat:
>  > On a machine with two igb interfaces,
>  > use an rc.conf like this:
>  > 
>  > hostname="test.test"
>  > gateway_enable="YES"
>  > ifconfig_igb0="inet 10.10.10.1/24"
>  > ifconfig_igb1="inet 10.10.11.1/24"
>  > 
>  > Then try to create heavy traffic between the two networks.
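
(One way to generate heavy forwarded traffic for this test is iperf
(benchmarks/iperf in ports); the host addresses below are made up:

    # on a host in 10.10.11.0/24, e.g. 10.10.11.100, run the server
    iperf -s

    # on a host in 10.10.10.0/24, push traffic through the gateway
    iperf -c 10.10.11.100 -t 60 -P 4

An ftp transfer between hosts on the two subnets, as the reporter did,
works just as well.)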
>  >> Fix:
>  > 
>  > 
>  >> Release-Note:
>  >> Audit-Trail:
>  >> Unformatted:
>  > _______________________________________________
>  > freebsd-bugs at freebsd.org


This is not a bug, unless you consider poorly written drivers to be bugs. You also need to provide your tuning parameters for the card; otherwise there is nothing to learn from the report.

The issue is that the driver doesn't address the purpose of the controller, which is to utilize multiprocessor systems more effectively. The effect is that lock contention actually makes things worse than if you just used a single task, as em does. Until the multiqueue drivers are rewritten to manage locks properly, you are best advised to save your money and stick with em.

You should get similar performance using a single queue as you do with em. You could also force a legacy configuration by making igb_setup_msix() return 0. Sadly, this is the best performance you will get from the stock driver.
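
If you want to try the legacy path, the quickest hack is to short-circuit
igb_setup_msix() in the driver source (if_igb.c; its location varies between
releases) so it reports no MSI-X vectors. A sketch of the idea follows; the
surrounding function body depends on the driver revision, so treat it as
illustrative rather than a drop-in patch:

    static int
    igb_setup_msix(struct adapter *adapter)
    {
            /*
             * Hack: report that no MSI-X vectors are available so the
             * driver falls back to a single legacy/MSI interrupt and a
             * single queue.
             */
            return (0);

            /* ... original MSI-X allocation code below is never reached ... */
    }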

Barney


