two problems in dev/e1000/if_lem.c::lem_handle_rxtx()

Barney Cordoba barney_cordoba at yahoo.com
Thu Jan 17 16:09:00 UTC 2013



--- On Wed, 1/16/13, Luigi Rizzo <rizzo at iet.unipi.it> wrote:

> From: Luigi Rizzo <rizzo at iet.unipi.it>
> Subject: Re: two problems in dev/e1000/if_lem.c::lem_handle_rxtx()
> To: "Barney Cordoba" <barney_cordoba at yahoo.com>
> Cc: freebsd-net at freebsd.org
> Date: Wednesday, January 16, 2013, 9:55 PM
> On Wed, Jan 16, 2013 at 06:19:01AM
> -0800, Barney Cordoba wrote:
> > 
> > 
> > --- On Tue, 1/15/13, Luigi Rizzo <rizzo at iet.unipi.it>
> wrote:
> > 
> > > From: Luigi Rizzo <rizzo at iet.unipi.it>
> > > Subject: two problems in
> dev/e1000/if_lem.c::lem_handle_rxtx()
> > > To: head at freebsd.org,
> "freebsd-net at freebsd.org"
> <freebsd-net at freebsd.org>
> > > Cc: "Jack Vogel" <jfvogel at gmail.com>
> > > Date: Tuesday, January 15, 2013, 8:23 PM
> > > Hi,
> > > i found a couple of problems in
> > > dev/e1000/if_lem.c::lem_handle_rxtx(),
> > > (compare with dev/e1000/if_em.c::em_handle_que() for better
> > > understanding):
> > > 
> > > 1. in if_em.c::em_handle_que(), when em_rxeof() exceeds the
> > >    rx_process_limit, the task is rescheduled so it can
> > >    complete the work. Conversely, in if_lem.c::lem_handle_rxtx()
> > >    the lem_rxeof() is only run once, and if there are more
> > >    pending packets the only chance to drain them is to receive
> > >    (many) more interrupts.
> > > 
> > >    This is a relatively serious problem, because the receiver
> > >    has a hard time recovering.
> > > 
> > >    I'd like to commit a fix to this, same as it is done in e1000.
> > > 
> > > 2. in if_em.c::em_handle_que(), interrupts are reenabled
> > >    unconditionally, whereas lem_handle_rxtx() only enables
> > >    them if IFF_DRV_RUNNING is set.
> > > 
> > >    I cannot really tell what is the correct way here, so I'd
> > >    like to put a comment there unless there is a good
> > >    suggestion on what to do.
> > > 
> > >    Accesses to the intr register are race-prone anyways
> > >    (disabled in fastintr, enabled in the rxtx task without
> > >    holding any lock, and generally accessed under EM_CORE_LOCK
> > >    in other places), and presumably enabling/disabling the
> > >    interrupts around activations of the task is just an
> > >    optimization (and on a VM, it is actually a pessimization
> > >    due to the huge cost of VM exits).
> > > 
> > > cheers
> > > luigi
> > 
> > This is not really a big deal; this is how things worked for a
> > million years before we had task queues.
> 
> i agree that the second issue is not a big deal.
> 
> The first one, on the contrary, is a real problem no matter
> how you set the 'work' parameter (unless you make it large
> enough to drain the entire queue in one call).

Which should be the goal, except in extreme circumstances. Having more
packets pending than "work" should be the extreme case, not the norm.

All "work" should do is smooth out bursts of packets. If you're
consistently exceeding it, then either your work parameter is too low or
your interrupt moderation is too wide. Adding a cleanup task simply
compensates for bad tuning.

BC

