kern/135222: [igb] low speed routing between two igb interfaces

Barney Cordoba barney_cordoba at yahoo.com
Fri Jun 19 14:56:02 UTC 2009




--- On Wed, 6/17/09, Michael <freebsdusb at bindone.de> wrote:

> From: Michael <freebsdusb at bindone.de>
> Subject: Re: kern/135222: [igb] low speed routing between two igb interfaces
> To: freebsd-net at FreeBSD.org
> Date: Wednesday, June 17, 2009, 9:40 PM
> The following reply was made to PR kern/135222; it has been noted by GNATS.
> 
> From: Michael <freebsdusb at bindone.de>
> To: Barney Cordoba <barney_cordoba at yahoo.com>
> Cc: freebsd-gnats-submit at FreeBSD.org
> Subject: Re: kern/135222: [igb] low speed routing between two igb interfaces
> Date: Thu, 18 Jun 2009 03:32:15 +0200
> 
>  Barney Cordoba wrote:
>  > 
>  > 
>  > --- On Wed, 6/17/09, Michael <freebsdusb at bindone.de> wrote:
>  > 
>  >> From: Michael <freebsdusb at bindone.de>
>  >> Subject: Re: kern/135222: [igb] low speed routing between two igb interfaces
>  >> To: "Barney Cordoba" <barney_cordoba at yahoo.com>
>  >> Cc: freebsd-net at FreeBSD.org
>  >> Date: Wednesday, June 17, 2009, 5:28 PM
>  >> Barney Cordoba wrote:
>  >>>
>  >>> --- On Fri, 6/12/09, Michael <freebsdusb at bindone.de> wrote:
>  >>>> From: Michael <freebsdusb at bindone.de>
>  >>>> Subject: Re: kern/135222: [igb] low speed routing between two igb interfaces
>  >>>> To: freebsd-net at FreeBSD.org
>  >>>> Date: Friday, June 12, 2009, 5:50 AM
>  >>>> The following reply was made to PR kern/135222; it has been noted by GNATS.
>  >>>>
>  >>>> From: Michael <freebsdusb at bindone.de>
>  >>>> To: Cc: freebsd-gnats-submit at FreeBSD.org
>  >>>> Subject: Re: kern/135222: [igb] low speed routing between two igb interfaces
>  >>>> Date: Fri, 12 Jun 2009 11:45:47 +0200
>  >>>>
>  >>>>   The original poster reported that the suggested fix works for him:
>  >>>>   ---
>  >>>>   Hello Michael,
>  >>>>   
>  >>>>   Thank you. It's working.
>  >>>>   
>  >>>>   I consider it necessary to put this into the release errata.
>  >>>>   
>  >>>>   
>  >>>>   Mishustin Andrew wrote:
>  >>>>   >> Number:         135222
>  >>>>   >> Category:       kern
>  >>>>   >> Synopsis:       [igb] low speed routing between two igb interfaces
>  >>>>   >> Confidential:   no
>  >>>>   >> Severity:       serious
>  >>>>   >> Priority:       medium
>  >>>>   >> Responsible:    freebsd-bugs
>  >>>>   >> State:          open
>  >>>>   >> Quarter:
>  >>>>   >> Keywords:
>  >>>>   >> Date-Required:
>  >>>>   >> Class:          sw-bug
>  >>>>   >> Submitter-Id:   current-users
>  >>>>   >> Arrival-Date:   Wed Jun 03 18:30:01 UTC 2009
>  >>>>   >> Closed-Date:
>  >>>>   >> Last-Modified:
>  >>>>   >> Originator:     Mishustin Andrew
>  >>>>   >> Release:        FreeBSD 7.1-RELEASE amd64, FreeBSD 7.2-RELEASE amd64
>  >>>>   >> Organization:
>  >>>>   > HNT
>  >>>>   >> Environment:
>  >>>>   > FreeBSD test.hnt 7.2-RELEASE FreeBSD 7.2-RELEASE #12: Thu Apr 30 18:28:15 MSD 2009     admin at test.hnt:/usr/src/sys/amd64/compile/GENERIC amd64
>  >>>>   >> Description:
>  >>>>   > I made a FreeBSD multiprocessor server to act as a simple gateway.
>  >>>>   > It uses the onboard Intel 82575EB Dual-Port Gigabit Ethernet Controller.
>  >>>>   > I observe traffic speed near 400 Kbit/s.
>  >>>>   > I tested both interfaces separately -
>  >>>>   > an ftp client works at a speed near 1 Gbit/s in both directions.
>  >>>>   > Then I changed the NIC to an old Intel "em" NIC - the gateway works at a speed near 1 Gbit/s.
>  >>>>   > 
>  >>>>   > Looks like a bug in the igb driver has an effect upon forwarded traffic.
>  >>>>   > 
>  >>>>   > If you try
>  >>>>   > hw.igb.enable_aim=0
>  >>>>   > the speed is near 1 Mbit/s.
>  >>>>   > 
>  >>>>   > hw.igb.rxd, hw.igb.txd, "ifconfig -tso" have no effect.
>  >>>>   > 
>  >>>>   > Nothing in messages.log
>  >>>>   > 
>  >>>>   > netstat -m
>  >>>>   > 516/1674/2190 mbufs in use (current/cache/total)
>  >>>>   > 515/927/1442/66560 mbuf clusters in use (current/cache/total/max)
>  >>>>   > 515/893 mbuf+clusters out of packet secondary zone in use (current/cache)
>  >>>>   > 0/44/44/33280 4k (page size) jumbo clusters in use (current/cache/total/max)
>  >>>>   > 0/0/0/16640 9k jumbo clusters in use (current/cache/total/max)
>  >>>>   > 0/0/0/8320 16k jumbo clusters in use (current/cache/total/max)
>  >>>>   > 1159K/2448K/3607K bytes allocated to network (current/cache/total)
>  >>>>   > 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
>  >>>>   > 0/0/0 requests for jumbo clusters denied (4k/9k/16k)
>  >>>>   > 0/0/0 sfbufs in use (current/peak/max)
>  >>>>   > 0 requests for sfbufs denied
>  >>>>   > 0 requests for sfbufs delayed
>  >>>>   > 0 requests for I/O initiated by sendfile
>  >>>>   > 0 calls to protocol drain routines
>  >>>>   > 
>  >>>>   > I use only IPv4 traffic.
>  >>>>   > 
>  >>>>   >> How-To-Repeat:
>  >>>>   > On a machine with two igb interfaces,
>  >>>>   > use an rc.conf like this:
>  >>>>   > 
>  >>>>   > hostname="test.test"
>  >>>>   > gateway_enable="YES"
>  >>>>   > ifconfig_igb0="inet 10.10.10.1/24"
>  >>>>   > ifconfig_igb1="inet 10.10.11.1/24"
>  >>>>   > 
>  >>>>   > Then try to create heavy traffic between the two networks.
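
"Heavy traffic between the two networks" here just means routed load that crosses from the igb0 subnet to the igb1 subnet. A minimal sketch of one way to generate it, assuming two extra test hosts and iperf installed from ports (the host addresses and the choice of iperf are illustrative; the original report simply used ftp transfers):

    # On a host in 10.10.11.0/24, e.g. 10.10.11.100, with its default
    # route pointing at 10.10.11.1 (the gateway's igb1 address):
    iperf -s

    # On a host in 10.10.10.0/24 with its default route via 10.10.10.1,
    # push TCP traffic through the gateway to the other subnet for 60s:
    iperf -c 10.10.11.100 -t 60
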
>  >>>>   >> Fix:
>  >>>>   > 
>  >>>>   > 
>  >>>>   >> Release-Note:
>  >>>>   >> Audit-Trail:
>  >>>>   >> Unformatted:
>  >>>>   > _______________________________________________
>  >>>>   > freebsd-bugs at freebsd.org
>  >>>
>  >>> This is not a bug, unless you consider poorly written drivers to
>  >>> be bugs. You need to provide your tuning parameters for the card
>  >>> as well, otherwise there's nothing to learn.
>  >>> The issue is that the driver doesn't address the purpose of the
>  >>> controller, which is to utilize multiprocessor systems more
>  >>> effectively. The effect is that lock contention actually makes
>  >>> things worse than if you just use a single task, as em does.
>  >>> Until the multiqueue drivers are re-written to manage locks
>  >>> properly, you are best advised to save your money and stick with
>  >>> em.
>  >>> You should get similar performance using 1 queue as with em. You
>  >>> could also force a legacy configuration by making igb_setup_msix
>  >>> return 0. Sadly, this is the best performance you will get from
>  >>> the stock driver.
>  >>> Barney
>  >>>
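
To make the second suggestion above concrete: forcing a legacy configuration means making the MSI-X setup path report that no vectors are available, so the driver attaches with a single legacy/MSI interrupt and one queue. This is a rough sketch only, assuming an if_igb.c of that era with an igb_setup_msix() helper that returns the number of vectors to use; the struct below is a stand-in, not the real driver source:

    /*
     * Illustration only -- not the actual FreeBSD if_igb.c code.
     * Reporting zero MSI-X vectors makes the attach path fall back
     * to a single legacy/MSI interrupt and one queue.
     */
    struct adapter;                     /* stand-in for the driver softc */

    static int
    igb_setup_msix(struct adapter *adapter)
    {
            (void)adapter;              /* unused in this sketch */
            return (0);                 /* "no MSI-X vectors available" */
    }

Rebuilding the kernel (or the if_igb module) would be needed for such a change to take effect.
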
>  >> I tried using 1 queue and it didn't make things any better
>  >> (actually I'm not sure if that worked at all). Whether it is
>  >> considered a bug or not doesn't really matter; what actually
>  >> matters for users (who cannot always choose which network
>  >> controller will be on-board) is that they get at least decent
>  >> performance when doing IP forwarding (and not the 5-50 kb/s I've
>  >> seen). You can get this out of the controller by disabling LRO
>  >> through the sysctl. That's why I've been asking to put this into
>  >> the release errata section and/or at least the igb man page,
>  >> because the sysctl isn't documented anywhere. Also, the fact that
>  >> tuning the sysctl only affects the behaviour when it's set on
>  >> boot might be considered problematic.
>  >>
>  >> So at the very least, I think the following should be done:
>  >> 1. Document the sysctl in the igb(4) man page.
>  >> 2. Add a known-issues paragraph to igb(4) which explains the
>  >>    issue and what to put in sysctl.conf to stop this from
>  >>    happening.
>  >> 3. Add an entry to the release errata page about this issue (like
>  >>    I suggested in one of my earlier emails), stating something
>  >>    like "see igb(4) for details".
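
The workaround being asked about boils down to a single boot-time tunable. This is a sketch only: the thread never spells out the OID, so the name below is assumed for illustration and should be checked against `sysctl -a | grep igb` or the driver source for the release in question. Since the setting reportedly only helps when applied at boot, /boot/loader.conf is the safer place than /etc/sysctl.conf:

    # /boot/loader.conf -- disable LRO in the igb driver before it
    # attaches (tunable name assumed for illustration; verify the
    # exact OID on your system)
    hw.igb.enable_lro="0"
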
>  >>
>  >> This is not about using the controller to its full potential, but
>  >> about saving Joe Admin from spending days figuring out why the
>  >> machine is forwarding packets more slowly than his BSD 2.x machine
>  >> did in the 90s.
>  >>
>  >> cheers
>  >> Michael
>  > 
>  > None of the offload crap should be enabled by default.
>  > 
>  > The real point is that "Joe Admin" shouldn't be using controllers
>  > that have bad drivers at all. If you have to use whatever hardware
>  > you have lying around, and don't have enough flexibility to lay out
>  > $100 for a 2-port controller that works with your $2000 server,
>  > then you need to get your priorities in order. People go out and
>  > buy redundant power supplies, high-GHz quad-core processors and
>  > gobs of memory, and then they use whatever crappy onboard
>  > controller they get, no matter how poorly it's supported. It's
>  > mindless.
>  > 
>  > Barney
>  > 
>  
>  How should anybody know that the controller is poorly supported if
>  there is nothing in the documentation, release notes, man pages or
>  anywhere else about this?
>  
>  The fact of the matter is that "the offload crap" _is_ enabled by
>  default. The release is out, and it claims to support the controller.
>  There _is_ a workaround, and I'm asking if somebody could document it
>  so users will have a chance. I'm also not convinced that it is a
>  crappy controller per se, just a poorly supported one. We used those
>  a lot before without any issues; unfortunately we now had to use IP
>  forwarding on a machine that has that controller (it has 6 interfaces
>  in total, four em ports and two igb ports, all of them are in use,
>  and I don't feel like hooking up the soldering iron).
>  
>  So, bottom line:
>  I say there is a problem with the driver, there is a workaround, and
>  it should be documented.
>  
>  You say the driver is bad, nobody should use it, and if they do it's
>  their own damn fault; we won't do anything about it and refuse to
>  tell anybody, because we are the only ones who should know; we don't
>  care whether people can actually use our software, and we still claim
>  the hardware is supported.
>  
>  Your attitude is really counterproductive (actually, googling around
>  I see you made similar statements in the past about stupid people not
>  willing to spend xxx$ on whatever piece of hardware, so maybe you're
>  just trolling).
>  
>  Michael

Tuning the card to be brain-dead isn't really a workaround. I'm sorry that you're not able to understand, but you can't educate the woodchucks, so carry on and feel free to do whatever you wish.

BC


      

