svn commit: r280759 - head/sys/netinet

Hans Petter Selasky hps at selasky.org
Mon Mar 30 11:29:06 UTC 2015


On 03/30/15 12:59, Gleb Smirnoff wrote:
> On Mon, Mar 30, 2015 at 10:51:51AM +0200, Hans Petter Selasky wrote:
> H> Hi,
> H>
> H> Like was mentioned here, maybe we need a global counter that is not
> H> accessed that frequently, and use per-cpu counters for the most frequent
> H> accesses. To keep the order somewhat sane, we need a global counter:
> H>
> H> Pseudo code:
> H>
> H> static uint32_t V_ip_id;
> H>
> H> PER_CPU(V_ip_id_start);
> H> PER_CPU(V_ip_id_end);
> H>
> H> static uint16_t
> H> get_next_id(void)
> H> {
> H>     uint32_t next;
> H>
> H>     if (PER_CPU(V_ip_id_start) == PER_CPU(V_ip_id_end)) {
> H>         next = atomic_add32(&V_ip_id, 256);
> H>         PER_CPU(V_ip_id_start) = next;
> H>         PER_CPU(V_ip_id_end) = next + 256;
> H>     }
> H>     return (PER_CPU(V_ip_id_start)++);
> H> }
>
> What's the rationale of the code? Trying to keep CPUs off by 256 from
> each other?

Hi,

Not quite. Every time a CPU runs out of IDs, it allocates a new batch of 
256 consecutive numbers from the global counter. That way the CPUs still 
hand out numbers in roughly global sequence.

> The suggested code suffers from migration more than what I suggested. E.g.
> you can assign V_ip_id_start on CPU 1 then migrate to CPU 2 and assign
> V_ip_id_end, yielding in the broken state of the ID generating machine.
> Or you can compare start and end on different CPUs, which causes less harm.

Surely we need to add critical_enter() and critical_exit() around this 
code; it is just meant as an example.

>
> And still the code doesn't protect against full 65k overflow. One CPU
> can emit a burst over 65k packets, and then go on and reuse all the IDs
> that other CPUs are using now.
>

Given that sending 65K packets takes some time, a shared atomic 
operation will slow this wraparound down more than a purely per-CPU 
counter would. If wraparound speed is the concern, why do you want to 
make ID allocation faster rather than slower? Should there perhaps be a 
DELAY() in there if too many IDs rush out too quickly?

--HPS

