CFR projects/pf: vnet awareness for pf_overloadqueue

Martin Matuska mm at FreeBSD.org
Sun Mar 30 08:27:21 UTC 2014


Hi,

with the pf_mtag_z patch applied, the second patch needed to fix the
panics I am experiencing is the overload queue patch.

I have looked into solving this via context (adding a "struct vnet"
member to pf_overload_entry). This leaves some problems unsolved: first,
we now carry vnet information per-entry instead of per-queue.
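For reference, the per-entry variant amounts to roughly this (field
layout as in my tree; the last member is the addition):

        struct pf_overload_entry {
                SLIST_ENTRY(pf_overload_entry)  next;
                struct pf_addr                  addr;
                sa_family_t                     af;
                uint8_t                         dir;
                struct pf_rule                  *rule;
                struct vnet                     *vnet;  /* added: owner vnet */
        };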

There are two places in pf_overload_task() where we are not processing
an entry but still need vnet information:

1. V_pf_idhash[i] in pf_overload_task():

        for (int i = 0; i <= pf_hashmask; i++) {
                struct pf_idhash *ih = &V_pf_idhash[i];
                struct pf_state_key *sk;
                struct pf_state *s;

                PF_HASHROW_LOCK(ih);
                LIST_FOREACH(s, &ih->states, entry) {

2. The end of pf_overload_task(), though this one is only the debug
printf gated by V_pf_status.debug:

        if (V_pf_status.debug >= PF_DEBUG_MISC)
                printf("%s: %u states killed", __func__, killed);

On the other hand, if we want to keep per-vnet overload queues, then it
makes sense to store the vnet information at queue level.
And if we pack vnet information into each entry while the overload queue
has a global lock anyway, why not keep a single global queue with
entries from different vnets?

As it stands, the code panics whenever pf_overload_task() fires, because
the vnet context is missing. It needs to be fixed one way or the other.
A patch adding per-queue vnet information is attached.
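The per-queue idea boils down to handing the task its vnet and entering
that context before touching any V_ variables. A rough sketch only (the
attached patch is authoritative; names here are illustrative):

        /* At per-vnet initialization, pass the owning vnet as the
         * task argument: */
        TASK_INIT(&V_pf_overloadtask, 0, pf_overload_task, curvnet);

        static void
        pf_overload_task(void *v, int pending)
        {
                /* Enter the vnet the queue belongs to, so that
                 * V_pf_idhash, V_pf_status etc. resolve correctly. */
                CURVNET_SET((struct vnet *)v);
                ...
                CURVNET_RESTORE();
        }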

Thank you.
mm
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pf_overloadqueue.patch
Type: text/x-patch
Size: 3529 bytes
Desc: not available
URL: <http://lists.freebsd.org/pipermail/freebsd-pf/attachments/20140330/d3dea6c3/attachment.bin>
