netmap_mem_get_info

Vincenzo Maffione v.maffione at gmail.com
Sat Feb 18 09:36:12 UTC 2017


For each netmap memory area (in this case you use only the global one):

- if_num must be >= the total number of NIOCREGIF registrations on the
netmap memory area. The default is 100 and it should be OK.

- Then you have to count how many TX and RX rings you use in total across
all the interfaces you NIOCREGIF'd in the memory area.
Using your terminology, in the worst case you have (K + K + 1 + 1) rings
(NIC TX + NIC RX + host TX + host RX) for the outside interface and
(L + L + 1 + 1) rings for the inside interface. You also have 2 rings for
each pipe (i.e., for each couple of pipe endpoints) you are using.
So ring_num must be >= that total ring count.

- Then you need to count the total number of slots in all those rings. Each
ring may have a different number of descriptors, so you need to check that
for each interface (e.g. "ethtool -g ethX" on Linux, probably some
sysctl on FreeBSD).
Pipes by default have the same number of slots as their parent adapter.
If x is the total number of slots, then it must be buf_num >= x.

- Finally, you have to look for the ring(s) with the maximum number of
slots. If that number is y, then it must be ring_size >= sizeof(struct
netmap_slot)*y + sizeof(struct netmap_ring). The sketch below puts all
four rules together.
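
To make the four rules concrete, here is a minimal C sketch that puts
them together. K, L, the number of pipes P and the per-ring slot counts
are placeholder values for your setup (netmap does not export them), and
the two sizeof() terms require the netmap headers to be installed:

#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>
#include <net/netmap.h>     /* struct netmap_slot, struct netmap_ring */

int main(void)
{
    unsigned K = 4;              /* outside NIC TX (= RX) rings, assumed */
    unsigned L = 4;              /* inside NIC TX (= RX) rings, assumed */
    unsigned P = 16;             /* pipes towards the workers, assumed */
    unsigned out_slots = 2048;   /* per-ring slots ("ethtool -g ethX") */
    unsigned in_slots = 2048;
    unsigned host_slots = 1024;  /* host ring slots, assumed */
    unsigned pipe_slots = out_slots; /* pipes inherit the parent's size */

    /* Rule 1 (if_num) is just the NIOCREGIF count; the default of 100
     * is usually enough, so it is not computed here. */

    /* Rule 2: NIC rings + host rings for both interfaces, plus
     * 2 rings per pipe. */
    unsigned ring_num = (K + K + 1 + 1) + (L + L + 1 + 1) + 2 * P;

    /* Rule 3: total slots across all of those rings. */
    unsigned buf_num = 2 * K * out_slots + 2 * L * in_slots +
                       4 * host_slots + 2 * P * pipe_slots;

    /* Rule 4: the largest ring determines ring_size (in bytes); host
     * and pipe rings are assumed not to be larger here. */
    unsigned y = out_slots > in_slots ? out_slots : in_slots;
    size_t ring_size = sizeof(struct netmap_slot) * y +
                       sizeof(struct netmap_ring);

    printf("ring_num >= %u, buf_num >= %u, ring_size >= %zu\n",
           ring_num, buf_num, ring_size);
    return 0;
}

Anything that comes out larger than the current defaults can then be
raised through the dev.netmap.* sysctls; as discussed below, the changes
do not take effect while the memory area is still in use.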

Cheers,
  Vincenzo

2017-02-16 21:58 GMT+01:00 Slawa Olhovchenkov <slw at zxy.spb.ru>:

> On Thu, Feb 16, 2017 at 09:48:14PM +0100, Vincenzo Maffione wrote:
>
> > Not sure what you mean. While the memory areas are in use, the real
> > values (*_num, *_size) are not changed.
> > At NIOCREGIF time you can say what allocator you are interested in by
> > writing a non-zero id inside req.nr_arg2.
>
> My application has N balancer threads and M worker threads.
> The application serves the outside interface w/ K rings (and ring size N)
> and the inside interface w/ L rings (and ring size M).
>
> All balancer threads share the rings of the interfaces in the following
> manner:
>
> the first thread opens descriptors to the 1st ring of the inside and
> outside interfaces,
> the next thread opens descriptors to the 2nd ring of the inside and
> outside interfaces,
>
> and so on, in round-robin.
>
> Every balancer thread opens 2 pipes to every worker thread.
>
> What if_num, ring_num and buf_num do I need to configure?
>
> > 2017-02-16 21:38 GMT+01:00 Slawa Olhovchenkov <slw at zxy.spb.ru>:
> >
> > > On Thu, Feb 16, 2017 at 09:14:19PM +0100, Vincenzo Maffione wrote:
> > >
> > > > Hi,
> > > >   You're right, we'll try to add more details.
> > > >
> > > > In any case, buf_size, ring_size and if_size are the sizes in bytes
> > > > of each buffer, ring and netmap_if (control data structure),
> > > > respectively.
> > > > So the maximum number of slots for each ring is ring_size/16, as 16
> > > > is the size in bytes of struct netmap_slot.
> > > >
> > > > On the other hand, buf_num, ring_num and if_num are the total
> > > > numbers of buffers, rings and netmap_if objects in each netmap
> > > > memory area (aka "allocator").
> > > > By default there is a single memory area used by all the hardware
> > > > NICs and a separate memory area for each VALE port.
> > > > This is already configurable, however.
> > >
> > > These values also depend on the open netmap descriptors, right?
> > >
> > > > 2017-02-14 13:36 GMT+01:00 Slawa Olhovchenkov <slw at zxy.spb.ru>:
> > > >
> > > > > On Tue, Feb 14, 2017 at 12:26:55PM +0100, Vincenzo Maffione wrote:
> > > > >
> > > > > > Hi,
> > > > > >   Have you tried to play with netmap sysctl parameters like:
> > > > > >
> > > > > > dev.netmap.buf_num
> > > > > > dev.netmap.ring_num
> > > > > > dev.netmap.if_num
> > > > > >
> > > > > > those are listed in the sysctl section of the netmap man page.
> > > > >
> > > > > The man page hides the details about the calculation rules for
> > > > > these parameters.
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Vincenzo Maffione
> > >
> >
> >
> >
> > --
> > Vincenzo Maffione
>
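
PS: since selecting the allocator via req.nr_arg2 came up above, this is
roughly how it looks with the (legacy) struct nmreq API; "em0" and the
memory id 1 are placeholder values:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <net/netmap.h>
#include <net/netmap_user.h>

int main(void)
{
    struct nmreq req;
    int fd = open("/dev/netmap", O_RDWR);

    if (fd < 0)
        return 1;
    memset(&req, 0, sizeof(req));
    strncpy(req.nr_name, "em0", sizeof(req.nr_name) - 1);
    req.nr_version = NETMAP_API;
    req.nr_arg2 = 1;  /* non-zero id selects the memory area to bind to */
    if (ioctl(fd, NIOCREGIF, &req) < 0) {
        perror("NIOCREGIF");
        close(fd);
        return 1;
    }
    printf("registered on allocator id %u\n", req.nr_arg2);
    close(fd);
    return 0;
}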



-- 
Vincenzo Maffione

