netmap overrun counters

Luigi Rizzo rizzo at iet.unipi.it
Fri Apr 29 07:04:13 UTC 2016


On Thu, Apr 28, 2016 at 02:53:25PM -0700, bazzoola wrote:
> Thanks Adrian, and thanks Luigi for the explanation:
> 
> On 04/28/2016 01:15 PM, Luigi Rizzo wrote:
> > 
> > please re-read the relevant part of the manual page:
> > 
> >    RECEIVE RINGS
> >      On receive rings, after a netmap system call, the slots in the range
> >      head... tail-1 contain received packets.  User code should process them
> >      and advance head and cur past slots it wants to return to the kernel.
> >      cur may be moved further ahead if the user code wants to wait for more
> >      packets without returning all the previous slots to the kernel.
> > 
> >      At the next NIOCRXSYNC/select()/poll(), slots up to head-1 are returned
> >      to the kernel for further receives, and tail may advance to report new
> >      incoming packets.
> >      Below is an example of the evolution of an RX ring:
> > 
> >          after the syscall, there are some (h)eld and some (R)eceived slots
> >                 head  cur     tail
> >                  |     |       |
> >                  v     v       v
> >           RX  [..hhhhhhRRRRRRRR..........]
> > 
> >          user advances head and cur, releasing some slots and holding others
> >                     head cur  tail
> >                       |  |     |
> >                       v  v     v
> >           RX  [..*****hhhRRRRRR...........]
> > 
> >          NIOCRXSYNC/poll()/select() recovers slots and reports new packets
> >                     head cur        tail
> >                       |  |           |
> >                       v  v           v
> >           RX  [.......hhhRRRRRRRRRRRR....]
> > 
> > 
> > tail advances if there are new packets _and_ can at most go one
> > slot before head. At that point the buffer is full and the NIC
> > starts dropping packets until your application consumes packets,
> > advances head+cur and makes room so that the NIC can copy new
> > packets to the buffers and the driver advances tail.
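
To make the model above concrete, here is a rough sketch of a receive
loop using the netmap_user.h helpers (nm_open, NETMAP_RXRING,
nm_ring_next); "netmap:em0" is only an example name and error handling
is minimal:

#include <poll.h>
#include <stdio.h>
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>

int
main(void)
{
    /* "netmap:em0" is only an example interface name */
    struct nm_desc *d = nm_open("netmap:em0", NULL, 0, NULL);
    if (d == NULL) {
        perror("nm_open");
        return 1;
    }

    for (;;) {
        struct pollfd pfd = { .fd = NETMAP_FD(d), .events = POLLIN };

        /* wait for the kernel to advance tail (new packets) */
        poll(&pfd, 1, -1);

        for (int i = d->first_rx_ring; i <= d->last_rx_ring; i++) {
            struct netmap_ring *ring = NETMAP_RXRING(d->nifp, i);

            /* slots head .. tail-1 hold received packets */
            while (!nm_ring_empty(ring)) {
                struct netmap_slot *slot = &ring->slot[ring->cur];
                char *buf = NETMAP_BUF(ring, slot->buf_idx);

                /* ... process slot->len bytes at buf ... */
                (void)buf;

                /* advancing head and cur returns the slot to the
                 * kernel at the next poll()/NIOCRXSYNC */
                ring->head = ring->cur = nm_ring_next(ring, ring->cur);
            }
        }
    }
    /* not reached */
    nm_close(d);
    return 0;
}

Here head and cur are advanced together; keeping cur ahead of head only
matters if you want to hold some slots (the (h)eld ones above) while
still waiting for more packets.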
> > 
> >     Basically, all I am trying to do is detect if frames are dropped in my
> >     application using netmap API.
> > 
> 
> I am looking at
> https://www.freebsd.org/cgi/man.cgi?query=netmap&manpath=FreeBSD+11-current
> 
> "Passing the NETMAP_DO_RX_POLL flag to NIOCREGIF updates receive rings
> even without read events"
> 
> This means that even if I don't update cur/head pointers in my
> application, netmap will keep updating its rings. Is that statement
> correct? If so, how is this useful if tail doesn't increment?

Your interpretation is not correct. Updating the ring means that tail
advances only up to the available space. If you don't update head/cur,
then when the ring is full tail will stop there and the NIC will start
dropping packets.
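
A cheap way to at least notice that condition from the application is
to check, after a sync, whether the ring is completely full, i.e. tail
has advanced to one slot before head. A small sketch (it assumes
net/netmap_user.h is already included); note this only tells you that
the NIC currently has no free slot, it is not a drop counter:

/* Heuristic: if tail sits one slot before head the RX ring is full,
 * so the NIC has nowhere to write new frames and is most likely
 * dropping (and counting overruns in its own registers). */
static int
rx_ring_full(struct netmap_ring *ring)
{
    return (nm_ring_next(ring, ring->tail) == ring->head);
}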

> 
> > 
> > wrong model :)
> > netmap per se never drops packets because it does not run code.
> > 
> > If your application does not read fast enough it is the NIC that
> > drops packets, and counts them as overrun; netmap cannot know
> > how many of them.
> 
> I have a simple test (without NETMAP_DO_RX_POLL set), I send UDP packets
> with a known counter and monitor that counter in my netmap application.
> If there is a mismatch I know a packet was dropped.
> 
> I also monitor sysctl em.0.overrun before and after I run the program
> and it stays 0.
> 
> After around 20 seconds of capturing frames (and storing them in
> memory), swap kicks in (my program starts paging) and I detect the 1st
> dropped packet using my application. However, sysctl *overrun for em
> never reports drops. Should I look at a different stat?

I have no idea. This is NIC-specific; there may be other stats that
report the queue overflows.
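
If you want to read whatever counter your driver does export from
inside the application, sysctlbyname(3) works on FreeBSD. The OID
below is only an example (some em(4) versions put per-MAC counters
under dev.em.0.mac_stats, e.g. missed_packets); check the output of
"sysctl dev.em.0" for the names your driver actually provides:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    uint64_t missed = 0;
    size_t len = sizeof(missed);

    /* example OID only; adjust to what "sysctl dev.em.0" shows */
    if (sysctlbyname("dev.em.0.mac_stats.missed_packets",
        &missed, &len, NULL, 0) == -1) {
        perror("sysctlbyname");
        return 1;
    }
    printf("missed_packets: %ju\n", (uintmax_t)missed);
    return 0;
}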

cheers
luigi

