svn commit: r226801 - stable/9/sys/contrib/pf/net

Gleb Smirnoff glebius at FreeBSD.org
Wed Oct 26 17:09:10 UTC 2011


Author: glebius
Date: Wed Oct 26 17:09:09 2011
New Revision: 226801
URL: http://svn.freebsd.org/changeset/base/226801

Log:
  Sync pf(4) and pfsync(4) with head, merging lots of important bugfixes
  required for normal operation of pfsync(4). Revisions merged:
  
    r226531 | bz | 2011-10-19 13:34:40 +0400 (Wed, 19 Oct 2011) | 4 lines
  
    Fix an obvious locking bug where we would lock again rather than unlock.
  
    r226532 | bz | 2011-10-19 14:04:24 +0400 (Wed, 19 Oct 2011) | 12 lines
  
    Pseudo interfaces should go at SI_SUB_PSEUDO.  However, at least
    pfsync also depends on pf being initialized already, so pf goes at
    SI_ORDER_FIRST and the interfaces go at SI_ORDER_ANY.
    The (VNET_)SYSINIT startups for pf then stay at SI_SUB_PROTO_BEGIN,
    while those for pfsync move to the later SI_SUB_PROTO_IF.
  
    This is not ideal either, but it is at least an order that should work
    for the moment and can be refined with the VIMAGE merge, once this
    actually works with more than one network stack.
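
    A condensed view of the resulting order, taken from the diff below:

        /* Interface (module) registration moves to SI_SUB_PSEUDO. */
        DECLARE_MODULE(pf, pf_mod, SI_SUB_PSEUDO, SI_ORDER_FIRST);
        DECLARE_MODULE(pflog, pflog_mod, SI_SUB_PSEUDO, SI_ORDER_ANY);
        DECLARE_MODULE(pfsync, pfsync_mod, SI_SUB_PSEUDO, SI_ORDER_ANY);

        /* pfsync's (VNET_)SYSINITs move from SI_SUB_PROTO_BEGIN to the
         * later SI_SUB_PROTO_IF; pf's stay at SI_SUB_PROTO_BEGIN. */
        #define PFSYNC_SYSINIT_ORDER    SI_SUB_PROTO_IF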
  
    r226533 | bz | 2011-10-19 14:08:58 +0400 (Wed, 19 Oct 2011) | 4 lines
  
    In the non-FreeBSD case we do not expect PF_LOCK and friends to do anything.
  
    r226535 | bz | 2011-10-19 14:16:42 +0400 (Wed, 19 Oct 2011) | 5 lines
  
    Adjust the PF_ASSERT() macro to what we usually use in the network stack:
    PF_LOCK_ASSERT() and PF_UNLOCK_ASSERT().
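
    The renamed assertions, as defined in pfvar.h in the diff below:

        #define PF_LOCK_ASSERT()        mtx_assert(&pf_task_mtx, MA_OWNED)
        #define PF_UNLOCK_ASSERT()      mtx_assert(&pf_task_mtx, MA_NOTOWNED)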
  
    r226536 | bz | 2011-10-19 15:04:49 +0400 (Wed, 19 Oct 2011) | 8 lines
  
    De-virtualize the pf_task_mtx lock.  At the current state of pf locking
    and virtualization it is not helpful and only complicates things.

    The current state of the art is not to virtualize these kinds of locks;
    inp_group/hash/info/.. are not virtualized either.
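
    In effect the lock becomes a plain global; roughly, per the diff below:

        /* Was: VNET_DEFINE(struct mtx, pf_task_mtx), one instance per vnet. */
        struct mtx pf_task_mtx;

        mtx_init(&pf_task_mtx, "pf task mtx", NULL, MTX_DEF);
        /* ... */
        mtx_destroy(&pf_task_mtx);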
  
    r226544 | bz | 2011-10-19 17:13:56 +0400 (Wed, 19 Oct 2011) | 12 lines
  
    Fix recursive pf locking leading to panics.  Splatter PF_LOCK_ASSERT()s
    around to document where we expect to be called with the lock held, so
    that unnoticed code paths are caught more easily.
    This does not necessarily improve locking in pfsync; it just tries
    to avoid the panics reported.
  
    PR:		kern/159390, kern/158873
    Submitted by:	pluknet (at least something that partly resembles
    		my patch ignoring other cleanup, which I only saw
    		too late on the 2nd PR)
  
    r226609 | glebius | 2011-10-21 15:11:18 +0400 (Fri, 21 Oct 2011) | 4 lines
  
    In FreeBSD, ip_output() expects ip_len and ip_off in host byte order.
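
    Roughly, for the pfsync template and outgoing header that means (see
    the diff below):

        #ifdef __FreeBSD__
                ip->ip_off = IP_DF;                     /* host byte order */
                ip->ip_len = m->m_pkthdr.len;
        #else
                ip->ip_off = htons(IP_DF);              /* OpenBSD: network order */
                ip->ip_len = htons(m->m_pkthdr.len);
        #endif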
  
    PR:		kern/159029
  
    r226623 | glebius | 2011-10-22 02:28:15 +0400 (Sat, 22 Oct 2011) | 5 lines
  
    Fix a race: we should update sc_len before dropping the pf lock;
    otherwise a number of packets can be queued on sc while we are in
    ip_output(), and we would then wipe the accumulated sc_len.  On the
    next pfsync_sendout() that would lead to writing beyond our mbuf
    cluster.
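
    A minimal sketch of the ordering this change enforces (the code was
    later reworked again by r226660 below):

        sc->sc_len = PFSYNC_MINPKT;     /* reset length under the pf lock */
        PF_UNLOCK();
        ip_output(m, NULL, NULL, IP_RAWOUTPUT, &sc->sc_imo, NULL);
        PF_LOCK();
        /* packets queued on sc meanwhile keep their accumulated sc_len */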
  
    r226655 | glebius | 2011-10-23 14:05:25 +0400 (Sun, 23 Oct 2011) | 5 lines
  
    The correct flag for uma_zalloc() is M_WAITOK.  M_WAIT is an old,
    deprecated flag from the historical mbuf(9) allocator.

    This is a style-only change.
  
    r226656 | glebius | 2011-10-23 14:13:20 +0400 (Sun, 23 Oct 2011) | 5 lines
  
    Absence of M_WAITOK in the malloc flags for UMA does not equal
    presence of M_NOWAIT.  Specify M_NOWAIT explicitly.

    This fixes sleeping while holding PF_LOCK().
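
    The relevant flag mapping and allocation, as they end up in the diff
    below:

        #define PR_NOWAIT       M_NOWAIT
        #define PR_WAITOK       M_WAITOK        /* was the stale M_WAIT */
        #define PR_ZERO         M_ZERO
        #define pool_get(p, f)  uma_zalloc(*(p), (f))

        /* Sleeping is allowed from ioctl context only; otherwise be explicit. */
        if (flags & PFSYNC_SI_IOCTL)
                pool_flags = PR_WAITOK | PR_ZERO;
        else
                pool_flags = PR_NOWAIT | PR_ZERO;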
  
    r226660 | glebius | 2011-10-23 18:59:54 +0400 (Sun, 23 Oct 2011) | 22 lines
  
    The fix from r226623 is not sufficient to close all races in pfsync(4).
  
    The root of the problem is the re-locking at the end of pfsync_sendout().
    Several functions call pfsync_sendout() while holding pointers to pf
    data on the stack, and they expect this data to stay consistent.
  
    To fix this, the following approach was taken:
  
    - pfsync_sendout() no longer calls ip_output() directly, but enqueues
      the mbuf on sc->sc_ifp's interface queue, which is currently unused,
      and then schedules the pfsync netisr.  PF_LOCK isn't dropped in
      pfsync_sendout().
    - The netisr runs through the queue and ip_output()s the packets
      on it.
  
    Apart from fixing the race, this also decouples the stack, fixing
    potential issues that may happen when sending pfsync(4) packets on
    the input path.
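
    A condensed sketch of the new send path (the full version is in the
    diff below):

        /* pfsync_sendout(), with PF_LOCK still held: queue and schedule. */
        sc->sc_len = PFSYNC_MINPKT;
        IFQ_ENQUEUE(&sc->sc_ifp->if_snd, m, dummy_error);
        schednetisr(NETISR_PFSYNC);

        /* pfsyncintr(): drain the queue and ip_output() outside the lock. */
        for (;;) {
                IF_DEQUEUE(&sc->sc_ifp->if_snd, m);
                if (m == NULL)
                        break;
                if (ip_output(m, NULL, NULL, IP_RAWOUTPUT, &sc->sc_imo,
                    NULL) == 0)
                        V_pfsyncstats.pfsyncs_opackets++;
                else
                        V_pfsyncstats.pfsyncs_oerrors++;
        }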
  
    Reviewed by:	eri (a quick review)
  
    r226661 | glebius | 2011-10-23 19:08:18 +0400 (Sun, 23 Oct 2011) | 13 lines
  
    - Fix a bad typo (FreeBSD-specific) in pfsync_bulk_update().  Instead
      of scheduling the next run of pfsync_bulk_update(), pfsync_bulk_fail()
      was scheduled.
      This led to an instant 100% state leak after the first bulk update
      request.
    - After the above fix, it became apparent that pfsync_bulk_update()
      lacks locking.  To fix this, the sc_bulk_tmo callout was converted to
      an mtx one (sketched below).  Eventually, all pf/pfsync callouts
      should be converted to the mtx version, since it isn't possible to
      stop or drain a non-mtx callout without risking a race.
    - Add a comment that callout_stop() in pfsync_clone_destroy() lacks
      locking.  Since pfsync0 can't be destroyed (yet), let it be for now.
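
    The conversion, as it appears in the diff below; the callout subsystem
    now takes pf_task_mtx around the handler:

        /* Was: callout_init(&sc->sc_bulk_tmo, CALLOUT_MPSAFE); */
        callout_init_mtx(&sc->sc_bulk_tmo, &pf_task_mtx, 0);
        /* ... */
        callout_reset(&sc->sc_bulk_tmo, 1, pfsync_bulk_update, sc);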
  
    r226662 | glebius | 2011-10-23 19:10:15 +0400 (Sun, 23 Oct 2011) | 2 lines
  
    Fix indentation, no code changed.
  
    r226663 | glebius | 2011-10-23 19:15:17 +0400 (Sun, 23 Oct 2011) | 4 lines
  
    Merge several fixes to bulk update processing from OpenBSD.  Merged
    revisions: 1.148, 1.149, 1.150.  This brings the number of states on
    master/slave to a sane value.
  
  Approved by:	re (kib)

Modified:
  stable/9/sys/contrib/pf/net/if_pflog.c
  stable/9/sys/contrib/pf/net/if_pfsync.c
  stable/9/sys/contrib/pf/net/pf_ioctl.c
  stable/9/sys/contrib/pf/net/pf_table.c
  stable/9/sys/contrib/pf/net/pfvar.h
Directory Properties:
  stable/9/sys/   (props changed)
  stable/9/sys/amd64/include/xen/   (props changed)
  stable/9/sys/boot/   (props changed)
  stable/9/sys/boot/i386/efi/   (props changed)
  stable/9/sys/boot/ia64/efi/   (props changed)
  stable/9/sys/boot/ia64/ski/   (props changed)
  stable/9/sys/boot/powerpc/boot1.chrp/   (props changed)
  stable/9/sys/boot/powerpc/ofw/   (props changed)
  stable/9/sys/cddl/contrib/opensolaris/   (props changed)
  stable/9/sys/conf/   (props changed)
  stable/9/sys/contrib/dev/acpica/   (props changed)
  stable/9/sys/contrib/octeon-sdk/   (props changed)
  stable/9/sys/contrib/pf/   (props changed)
  stable/9/sys/contrib/x86emu/   (props changed)

Modified: stable/9/sys/contrib/pf/net/if_pflog.c
==============================================================================
--- stable/9/sys/contrib/pf/net/if_pflog.c	Wed Oct 26 17:04:26 2011	(r226800)
+++ stable/9/sys/contrib/pf/net/if_pflog.c	Wed Oct 26 17:09:09 2011	(r226801)
@@ -429,7 +429,7 @@ static moduledata_t pflog_mod = { "pflog
 
 #define PFLOG_MODVER 1
 
-DECLARE_MODULE(pflog, pflog_mod, SI_SUB_PROTO_IFATTACHDOMAIN, SI_ORDER_ANY);
+DECLARE_MODULE(pflog, pflog_mod, SI_SUB_PSEUDO, SI_ORDER_ANY);
 MODULE_VERSION(pflog, PFLOG_MODVER);
 MODULE_DEPEND(pflog, pf, PF_MODVER, PF_MODVER, PF_MODVER);
 #endif /* __FreeBSD__ */

Modified: stable/9/sys/contrib/pf/net/if_pfsync.c
==============================================================================
--- stable/9/sys/contrib/pf/net/if_pfsync.c	Wed Oct 26 17:04:26 2011	(r226800)
+++ stable/9/sys/contrib/pf/net/if_pfsync.c	Wed Oct 26 17:09:09 2011	(r226801)
@@ -493,7 +493,7 @@ pfsync_clone_create(struct if_clone *ifc
 	ifp->if_mtu = 1500; /* XXX */
 #ifdef __FreeBSD__
 	callout_init(&sc->sc_tmo, CALLOUT_MPSAFE);
-	callout_init(&sc->sc_bulk_tmo, CALLOUT_MPSAFE);
+	callout_init_mtx(&sc->sc_bulk_tmo, &pf_task_mtx, 0);
 	callout_init(&sc->sc_bulkfail_tmo, CALLOUT_MPSAFE);
 #else
 	ifp->if_hardmtu = MCLBYTES; /* XXX */
@@ -540,7 +540,7 @@ pfsync_clone_destroy(struct ifnet *ifp)
 #ifdef __FreeBSD__
 	EVENTHANDLER_DEREGISTER(ifnet_departure_event, sc->sc_detachtag);
 #endif
-	timeout_del(&sc->sc_bulk_tmo);
+	timeout_del(&sc->sc_bulk_tmo);	/* XXX: need PF_LOCK() before */
 	timeout_del(&sc->sc_tmo);
 #if NCARP > 0
 #ifdef notyet
@@ -714,6 +714,8 @@ pfsync_state_import(struct pfsync_state 
 	int pool_flags;
 	int error;
 
+	PF_LOCK_ASSERT();
+
 #ifdef __FreeBSD__
 	if (sp->creatorid == 0 && V_pf_status.debug >= PF_DEBUG_MISC) {
 #else
@@ -760,7 +762,7 @@ pfsync_state_import(struct pfsync_state 
 	if (flags & PFSYNC_SI_IOCTL)
 		pool_flags = PR_WAITOK | PR_ZERO;
 	else
-		pool_flags = PR_ZERO;
+		pool_flags = PR_NOWAIT | PR_ZERO;
 
 	if ((st = pool_get(&V_pf_state_pl, pool_flags)) == NULL)
 		goto cleanup;
@@ -854,7 +856,11 @@ pfsync_state_import(struct pfsync_state 
 		CLR(st->state_flags, PFSTATE_NOSYNC);
 		if (ISSET(st->state_flags, PFSTATE_ACK)) {
 			pfsync_q_ins(st, PFSYNC_S_IACK);
+#ifdef __FreeBSD__
+			pfsync_sendout();
+#else
 			schednetisr(NETISR_PFSYNC);
+#endif
 		}
 	}
 	CLR(st->state_flags, PFSTATE_ACK);
@@ -1310,7 +1316,11 @@ pfsync_in_upd(struct pfsync_pkt *pkt, st
 			V_pfsyncstats.pfsyncs_stale++;
 
 			pfsync_update_state(st);
+#ifdef __FreeBSD__
+			pfsync_sendout();
+#else
 			schednetisr(NETISR_PFSYNC);
+#endif
 			continue;
 		}
 		pfsync_alloc_scrub_memory(&sp->dst, &st->dst);
@@ -1416,7 +1426,11 @@ pfsync_in_upd_c(struct pfsync_pkt *pkt, 
 			V_pfsyncstats.pfsyncs_stale++;
 
 			pfsync_update_state(st);
+#ifdef __FreeBSD__
+			pfsync_sendout();
+#else
 			schednetisr(NETISR_PFSYNC);
+#endif
 			continue;
 		}
 		pfsync_alloc_scrub_memory(&up->dst, &st->dst);
@@ -1469,7 +1483,9 @@ pfsync_in_ureq(struct pfsync_pkt *pkt, s
 			if (ISSET(st->state_flags, PFSTATE_NOSYNC))
 				continue;
 
+			PF_LOCK();
 			pfsync_update_state_req(st);
+			PF_UNLOCK();
 		}
 	}
 
@@ -1558,7 +1574,7 @@ pfsync_in_del_c(struct pfsync_pkt *pkt, 
 		pf_unlink_state(st);
 	}
 #ifdef __FreeBSD__
-	PF_LOCK();
+	PF_UNLOCK();
 #endif
 	splx(s);
 
@@ -1955,7 +1971,11 @@ pfsyncioctl(struct ifnet *ifp, u_long cm
 		ip->ip_hl = sizeof(sc->sc_template) >> 2;
 		ip->ip_tos = IPTOS_LOWDELAY;
 		/* len and id are set later */
+#ifdef __FreeBSD__
+		ip->ip_off = IP_DF;
+#else
 		ip->ip_off = htons(IP_DF);
+#endif
 		ip->ip_ttl = PFSYNC_DFLTTL;
 		ip->ip_p = IPPROTO_PFSYNC;
 		ip->ip_src.s_addr = INADDR_ANY;
@@ -1986,8 +2006,8 @@ pfsyncioctl(struct ifnet *ifp, u_long cm
 #endif
 				printf("pfsync: requesting bulk update\n");
 #ifdef __FreeBSD__
-				callout_reset(&sc->sc_bulkfail_tmo, 5 * hz,
-				    pfsync_bulk_fail, V_pfsyncif);
+			callout_reset(&sc->sc_bulkfail_tmo, 5 * hz,
+			    pfsync_bulk_fail, V_pfsyncif);
 #else
 			timeout_add_sec(&sc->sc_bulkfail_tmo, 5);
 #endif
@@ -2138,12 +2158,13 @@ pfsync_sendout(void)
 #endif
 #ifdef __FreeBSD__
 	size_t pktlen;
+	int dummy_error;
 #endif
 	int offset;
 	int q, count = 0;
 
 #ifdef __FreeBSD__
-	PF_ASSERT(MA_OWNED);
+	PF_LOCK_ASSERT();
 #else
 	splassert(IPL_NET);
 #endif
@@ -2207,7 +2228,11 @@ pfsync_sendout(void)
 	bcopy(&sc->sc_template, ip, sizeof(*ip));
 	offset = sizeof(*ip);
 
+#ifdef __FreeBSD__
+	ip->ip_len = m->m_pkthdr.len;
+#else
 	ip->ip_len = htons(m->m_pkthdr.len);
+#endif
 	ip->ip_id = htons(ip_randomid());
 
 	/* build the pfsync header */
@@ -2337,35 +2362,22 @@ pfsync_sendout(void)
 #ifdef __FreeBSD__
 	sc->sc_ifp->if_opackets++;
 	sc->sc_ifp->if_obytes += m->m_pkthdr.len;
+	sc->sc_len = PFSYNC_MINPKT;
+
+	IFQ_ENQUEUE(&sc->sc_ifp->if_snd, m, dummy_error);
+	schednetisr(NETISR_PFSYNC);
 #else
 	sc->sc_if.if_opackets++;
 	sc->sc_if.if_obytes += m->m_pkthdr.len;
-#endif
 
-#ifdef __FreeBSD__
-	PF_UNLOCK();
-#endif
 	if (ip_output(m, NULL, NULL, IP_RAWOUTPUT, &sc->sc_imo, NULL) == 0)
-#ifdef __FreeBSD__
-	{
-		PF_LOCK();
-#endif
-		V_pfsyncstats.pfsyncs_opackets++;
-#ifdef __FreeBSD__
-	}
-#endif
+		pfsyncstats.pfsyncs_opackets++;
 	else
-#ifdef __FreeBSD__
-	{
-		PF_LOCK();
-#endif
-		V_pfsyncstats.pfsyncs_oerrors++;
-#ifdef __FreeBSD__
-	}
-#endif
+		pfsyncstats.pfsyncs_oerrors++;
 
 	/* start again */
 	sc->sc_len = PFSYNC_MINPKT;
+#endif
 }
 
 void
@@ -2378,7 +2390,7 @@ pfsync_insert_state(struct pf_state *st)
 #endif
 
 #ifdef __FreeBSD__
-	PF_ASSERT(MA_OWNED);
+	PF_LOCK_ASSERT();
 #else
 	splassert(IPL_SOFTNET);
 #endif
@@ -2412,7 +2424,11 @@ pfsync_insert_state(struct pf_state *st)
 	pfsync_q_ins(st, PFSYNC_S_INS);
 
 	if (ISSET(st->state_flags, PFSTATE_ACK))
+#ifdef __FreeBSD__
+		pfsync_sendout();
+#else
 		schednetisr(NETISR_PFSYNC);
+#endif
 	else
 		st->sync_updates = 0;
 }
@@ -2430,7 +2446,7 @@ pfsync_defer(struct pf_state *st, struct
 	struct pfsync_deferral *pd;
 
 #ifdef __FreeBSD__
-	PF_ASSERT(MA_OWNED);
+	PF_LOCK_ASSERT();
 #else
 	splassert(IPL_SOFTNET);
 #endif
@@ -2477,7 +2493,7 @@ pfsync_undefer(struct pfsync_deferral *p
 	int s;
 
 #ifdef __FreeBSD__
-	PF_ASSERT(MA_OWNED);
+	PF_LOCK_ASSERT();
 #else
 	splassert(IPL_SOFTNET);
 #endif
@@ -2560,7 +2576,7 @@ pfsync_update_state(struct pf_state *st)
 	int sync = 0;
 
 #ifdef __FreeBSD__
-	PF_ASSERT(MA_OWNED);
+	PF_LOCK_ASSERT();
 #else
 	splassert(IPL_SOFTNET);
 #endif
@@ -2609,7 +2625,11 @@ pfsync_update_state(struct pf_state *st)
 
 	if (sync || (time_second - st->pfsync_time) < 2) {
 		pfsync_upds++;
+#ifdef __FreeBSD__
+		pfsync_sendout();
+#else
 		schednetisr(NETISR_PFSYNC);
+#endif
 	}
 }
 
@@ -2625,6 +2645,8 @@ pfsync_request_update(u_int32_t creatori
 	size_t nlen = sizeof(struct pfsync_upd_req);
 	int s;
 
+	PF_LOCK_ASSERT();
+
 	/*
 	 * this code does nothing to prevent multiple update requests for the
 	 * same state being generated.
@@ -2658,7 +2680,11 @@ pfsync_request_update(u_int32_t creatori
 	TAILQ_INSERT_TAIL(&sc->sc_upd_req_list, item, ur_entry);
 	sc->sc_len += nlen;
 
+#ifdef __FreeBSD__
+	pfsync_sendout();
+#else
 	schednetisr(NETISR_PFSYNC);
+#endif
 }
 
 void
@@ -2670,6 +2696,8 @@ pfsync_update_state_req(struct pf_state 
 	struct pfsync_softc *sc = pfsyncif;
 #endif
 
+	PF_LOCK_ASSERT();
+
 	if (sc == NULL)
 		panic("pfsync_update_state_req: nonexistant instance");
 
@@ -2685,7 +2713,11 @@ pfsync_update_state_req(struct pf_state 
 		pfsync_q_del(st);
 	case PFSYNC_S_NONE:
 		pfsync_q_ins(st, PFSYNC_S_UPD);
+#ifdef __FreeBSD__
+		pfsync_sendout();
+#else
 		schednetisr(NETISR_PFSYNC);
+#endif
 		return;
 
 	case PFSYNC_S_INS:
@@ -2710,7 +2742,7 @@ pfsync_delete_state(struct pf_state *st)
 #endif
 
 #ifdef __FreeBSD__
-	PF_ASSERT(MA_OWNED);
+	PF_LOCK_ASSERT();
 #else
 	splassert(IPL_SOFTNET);
 #endif
@@ -2771,7 +2803,7 @@ pfsync_clear_states(u_int32_t creatorid,
 #endif
 
 #ifdef __FreeBSD__
-	PF_ASSERT(MA_OWNED);
+	PF_LOCK_ASSERT();
 #else
 	splassert(IPL_SOFTNET);
 #endif
@@ -2801,6 +2833,8 @@ pfsync_q_ins(struct pf_state *st, int q)
 	size_t nlen = pfsync_qs[q].len;
 	int s;
 
+	PF_LOCK_ASSERT();
+
 #ifdef __FreeBSD__
 	KASSERT(st->sync_state == PFSYNC_S_NONE,
 		("%s: st->sync_state == PFSYNC_S_NONE", __FUNCTION__));
@@ -2825,13 +2859,7 @@ pfsync_q_ins(struct pf_state *st, int q)
 	if (sc->sc_len + nlen > sc->sc_if.if_mtu) {
 #endif
 		s = splnet();
-#ifdef __FreeBSD__
-		PF_LOCK();
-#endif
 		pfsync_sendout();
-#ifdef __FreeBSD__
-		PF_UNLOCK();
-#endif
 		splx(s);
 
 		nlen = sizeof(struct pfsync_subheader) + pfsync_qs[q].len;
@@ -2888,7 +2916,9 @@ pfsync_update_tdb(struct tdb *t, int out
 
 		if (sc->sc_len + nlen > sc->sc_if.if_mtu) {
 			s = splnet();
+			PF_LOCK();
 			pfsync_sendout();
+			PF_UNLOCK();
 			splx(s);
 
 			nlen = sizeof(struct pfsync_subheader) +
@@ -2974,25 +3004,37 @@ pfsync_bulk_start(void)
 	struct pfsync_softc *sc = pfsyncif;
 #endif
 
-	sc->sc_ureq_received = time_uptime;
-
-	if (sc->sc_bulk_next == NULL)
 #ifdef __FreeBSD__
-		sc->sc_bulk_next = TAILQ_FIRST(&V_state_list);
+	if (V_pf_status.debug >= PF_DEBUG_MISC)
 #else
-		sc->sc_bulk_next = TAILQ_FIRST(&state_list);
+	if (pf_status.debug >= PF_DEBUG_MISC)
 #endif
-	sc->sc_bulk_last = sc->sc_bulk_next;
+		printf("pfsync: received bulk update request\n");
 
 #ifdef __FreeBSD__
-	if (V_pf_status.debug >= PF_DEBUG_MISC)
+	PF_LOCK();
+	if (TAILQ_EMPTY(&V_state_list))
 #else
-	if (pf_status.debug >= PF_DEBUG_MISC)
+	if (TAILQ_EMPTY(&state_list))
 #endif
-		printf("pfsync: received bulk update request\n");
+		pfsync_bulk_status(PFSYNC_BUS_END);
+	else {
+		sc->sc_ureq_received = time_uptime;
+		if (sc->sc_bulk_next == NULL)
+#ifdef __FreeBSD__
+			sc->sc_bulk_next = TAILQ_FIRST(&V_state_list);
+#else
+			sc->sc_bulk_next = TAILQ_FIRST(&state_list);
+#endif
+			sc->sc_bulk_last = sc->sc_bulk_next;
 
-	pfsync_bulk_status(PFSYNC_BUS_START);
-	pfsync_bulk_update(sc);
+			pfsync_bulk_status(PFSYNC_BUS_START);
+			callout_reset(&sc->sc_bulk_tmo, 1,
+			    pfsync_bulk_update, sc);
+	}
+#ifdef __FreeBSD__
+	PF_UNLOCK();
+#endif
 }
 
 void
@@ -3003,12 +3045,13 @@ pfsync_bulk_update(void *arg)
 	int i = 0;
 	int s;
 
+	PF_LOCK_ASSERT();
+
 	s = splsoftnet();
 #ifdef __FreeBSD__
 	CURVNET_SET(sc->sc_ifp->if_vnet);
-	PF_LOCK();
 #endif
-	do {
+	for (;;) {
 		if (st->sync_state == PFSYNC_S_NONE &&
 		    st->timeout < PFTM_MAX &&
 		    st->pfsync_time <= sc->sc_ureq_received) {
@@ -3024,26 +3067,33 @@ pfsync_bulk_update(void *arg)
 			st = TAILQ_FIRST(&state_list);
 #endif
 
-		if (i > 0 && TAILQ_EMPTY(&sc->sc_qs[PFSYNC_S_UPD])) {
+		if (st == sc->sc_bulk_last) {
+			/* we're done */
+			sc->sc_bulk_next = NULL;
+			sc->sc_bulk_last = NULL;
+			pfsync_bulk_status(PFSYNC_BUS_END);
+			break;
+		}
+
+#ifdef __FreeBSD__
+		if (i > 1 && (sc->sc_ifp->if_mtu - sc->sc_len) <
+#else
+		if (i > 1 && (sc->sc_if.if_mtu - sc->sc_len) <
+#endif
+		    sizeof(struct pfsync_state)) {
+			/* we've filled a packet */
 			sc->sc_bulk_next = st;
 #ifdef __FreeBSD__
 			callout_reset(&sc->sc_bulk_tmo, 1,
-			    pfsync_bulk_fail, sc);
+			    pfsync_bulk_update, sc);
 #else
 			timeout_add(&sc->sc_bulk_tmo, 1);
 #endif
-			goto out;
+			break;
 		}
-	} while (st != sc->sc_bulk_last);
-
-	/* we're done */
-	sc->sc_bulk_next = NULL;
-	sc->sc_bulk_last = NULL;
-	pfsync_bulk_status(PFSYNC_BUS_END);
+	}
 
-out:
 #ifdef __FreeBSD__
-	PF_UNLOCK();
 	CURVNET_RESTORE();
 #endif
 	splx(s);
@@ -3063,6 +3113,8 @@ pfsync_bulk_status(u_int8_t status)
 	struct pfsync_softc *sc = pfsyncif;
 #endif
 
+	PF_LOCK_ASSERT();
+
 	bzero(&r, sizeof(r));
 
 	r.subh.action = PFSYNC_ACT_BUS;
@@ -3096,7 +3148,9 @@ pfsync_bulk_fail(void *arg)
 #else
 		timeout_add_sec(&sc->sc_bulkfail_tmo, 5);
 #endif
+		PF_LOCK();
 		pfsync_request_update(0, 0);
+		PF_UNLOCK();
 	} else {
 		/* Pretend like the transfer was ok */
 		sc->sc_ureq_sent = 0;
@@ -3139,19 +3193,15 @@ pfsync_send_plus(void *plus, size_t plus
 #endif
 	int s;
 
+	PF_LOCK_ASSERT();
+
 #ifdef __FreeBSD__
 	if (sc->sc_len + pluslen > sc->sc_ifp->if_mtu) {
 #else
 	if (sc->sc_len + pluslen > sc->sc_if.if_mtu) {
 #endif
 		s = splnet();
-#ifdef __FreeBSD__
-		PF_LOCK();
-#endif
 		pfsync_sendout();
-#ifdef __FreeBSD__
-		PF_UNLOCK();
-#endif
 		splx(s);
 	}
 
@@ -3159,13 +3209,7 @@ pfsync_send_plus(void *plus, size_t plus
 	sc->sc_len += (sc->sc_pluslen = pluslen);
 
 	s = splnet();
-#ifdef __FreeBSD__
-	PF_LOCK();
-#endif
 	pfsync_sendout();
-#ifdef __FreeBSD__
-	PF_UNLOCK();
-#endif
 	splx(s);
 }
 
@@ -3200,13 +3244,12 @@ pfsync_state_in_use(struct pf_state *st)
 	if (sc == NULL)
 		return (0);
 
-	if (st->sync_state != PFSYNC_S_NONE)
+	if (st->sync_state != PFSYNC_S_NONE ||
+	    st == sc->sc_bulk_next ||
+	    st == sc->sc_bulk_last)
 		return (1);
 
-	if (sc->sc_bulk_next == NULL && sc->sc_bulk_last == NULL)
-		return (0);
-
-	return (1);
+	return (0);
 }
 
 u_int pfsync_ints;
@@ -3245,37 +3288,38 @@ pfsync_timeout(void *arg)
 void
 #ifdef __FreeBSD__
 pfsyncintr(void *arg)
+{
+	struct pfsync_softc *sc = arg;
+	struct mbuf *m;
+
+	CURVNET_SET(sc->sc_ifp->if_vnet);
+	pfsync_ints++;
+
+	for (;;) {
+		IF_DEQUEUE(&sc->sc_ifp->if_snd, m);
+		if (m == 0)
+			break;
+
+		if (ip_output(m, NULL, NULL, IP_RAWOUTPUT, &sc->sc_imo, NULL)
+		    == 0)
+			V_pfsyncstats.pfsyncs_opackets++;
+		else
+			V_pfsyncstats.pfsyncs_oerrors++;
+	}
+	CURVNET_RESTORE();
+}
 #else
 pfsyncintr(void)
-#endif
 {
-#ifdef __FreeBSD__
-	struct pfsync_softc *sc = arg;
-#endif
 	int s;
 
-#ifdef __FreeBSD__
-	if (sc == NULL)
-		return;
-
-	CURVNET_SET(sc->sc_ifp->if_vnet);
-#endif
 	pfsync_ints++;
 
 	s = splnet();
-#ifdef __FreeBSD__
-	PF_LOCK();
-#endif
 	pfsync_sendout();
-#ifdef __FreeBSD__
-	PF_UNLOCK();
-#endif
 	splx(s);
-
-#ifdef __FreeBSD__
-	CURVNET_RESTORE();
-#endif
 }
+#endif
 
 int
 pfsync_sysctl(int *name, u_int namelen, void *oldp, size_t *oldlenp, void *newp,
@@ -3380,7 +3424,7 @@ vnet_pfsync_uninit(const void *unused)
 }
 
 /* Define startup order. */
-#define	PFSYNC_SYSINIT_ORDER	SI_SUB_PROTO_BEGIN
+#define	PFSYNC_SYSINIT_ORDER	SI_SUB_PROTO_IF
 #define	PFSYNC_MODEVENT_ORDER	(SI_ORDER_FIRST) /* On boot slot in here. */
 #define	PFSYNC_VNET_ORDER	(PFSYNC_MODEVENT_ORDER + 2) /* Later still. */
 
@@ -3430,7 +3474,7 @@ static moduledata_t pfsync_mod = {
 
 #define PFSYNC_MODVER 1
 
-DECLARE_MODULE(pfsync, pfsync_mod, SI_SUB_PROTO_IFATTACHDOMAIN, SI_ORDER_ANY);
+DECLARE_MODULE(pfsync, pfsync_mod, SI_SUB_PSEUDO, SI_ORDER_ANY);
 MODULE_VERSION(pfsync, PFSYNC_MODVER);
 MODULE_DEPEND(pfsync, pf, PF_MODVER, PF_MODVER, PF_MODVER);
 #endif /* __FreeBSD__ */

Modified: stable/9/sys/contrib/pf/net/pf_ioctl.c
==============================================================================
--- stable/9/sys/contrib/pf/net/pf_ioctl.c	Wed Oct 26 17:04:26 2011	(r226800)
+++ stable/9/sys/contrib/pf/net/pf_ioctl.c	Wed Oct 26 17:09:09 2011	(r226801)
@@ -266,7 +266,7 @@ static struct cdevsw pf_cdevsw = {
 static volatile VNET_DEFINE(int, pf_pfil_hooked);
 #define V_pf_pfil_hooked	VNET(pf_pfil_hooked)
 VNET_DEFINE(int,		pf_end_threads);
-VNET_DEFINE(struct mtx,		pf_task_mtx);
+struct mtx			pf_task_mtx;
 
 /* pfsync */
 pfsync_state_import_t 		*pfsync_state_import_ptr = NULL;
@@ -287,18 +287,18 @@ SYSCTL_VNET_INT(_debug, OID_AUTO, pfugid
 	&VNET_NAME(debug_pfugidhack), 0,
 	"Enable/disable pf user/group rules mpsafe hack");
 
-void
+static void
 init_pf_mutex(void)
 {
 
-	mtx_init(&V_pf_task_mtx, "pf task mtx", NULL, MTX_DEF);
+	mtx_init(&pf_task_mtx, "pf task mtx", NULL, MTX_DEF);
 }
 
-void
+static void
 destroy_pf_mutex(void)
 {
 
-	mtx_destroy(&V_pf_task_mtx);
+	mtx_destroy(&pf_task_mtx);
 }
 void
 init_zone_var(void)
@@ -4259,7 +4259,7 @@ hook_pf(void)
 	struct pfil_head *pfh_inet6;
 #endif
 
-	PF_ASSERT(MA_NOTOWNED);
+	PF_UNLOCK_ASSERT();
 
 	if (V_pf_pfil_hooked)
 		return (0); 
@@ -4300,7 +4300,7 @@ dehook_pf(void)
 	struct pfil_head *pfh_inet6;
 #endif
 
-	PF_ASSERT(MA_NOTOWNED);
+	PF_UNLOCK_ASSERT();
 
 	if (V_pf_pfil_hooked == 0)
 		return (0);
@@ -4381,11 +4381,8 @@ pf_load(void)
 
 	init_zone_var();
 	sx_init(&V_pf_consistency_lock, "pf_statetbl_lock");
-	init_pf_mutex();
-	if (pfattach() < 0) {
-		destroy_pf_mutex();
+	if (pfattach() < 0)
 		return (ENOMEM);
-	}
 
 	return (0);
 }
@@ -4413,14 +4410,13 @@ pf_unload(void)
 	V_pf_end_threads = 1;
 	while (V_pf_end_threads < 2) {
 		wakeup_one(pf_purge_thread);
-		msleep(pf_purge_thread, &V_pf_task_mtx, 0, "pftmo", hz);
+		msleep(pf_purge_thread, &pf_task_mtx, 0, "pftmo", hz);
 	}
 	pfi_cleanup();
 	pf_osfp_flush();
 	pf_osfp_cleanup();
 	cleanup_pf_zone();
 	PF_UNLOCK();
-	destroy_pf_mutex();
 	sx_destroy(&V_pf_consistency_lock);
 	return error;
 }
@@ -4432,10 +4428,12 @@ pf_modevent(module_t mod, int type, void
 
 	switch(type) {
 	case MOD_LOAD:
+		init_pf_mutex();
 		pf_dev = make_dev(&pf_cdevsw, 0, 0, 0, 0600, PF_NAME);
 		break;
 	case MOD_UNLOAD:
 		destroy_dev(pf_dev);
+		destroy_pf_mutex();
 		break;
 	default:
 		error = EINVAL;
@@ -4450,6 +4448,6 @@ static moduledata_t pf_mod = {
 	0
 };
 
-DECLARE_MODULE(pf, pf_mod, SI_SUB_PROTO_IFATTACHDOMAIN, SI_ORDER_FIRST);
+DECLARE_MODULE(pf, pf_mod, SI_SUB_PSEUDO, SI_ORDER_FIRST);
 MODULE_VERSION(pf, PF_MODVER);
 #endif /* __FreeBSD__ */

Modified: stable/9/sys/contrib/pf/net/pf_table.c
==============================================================================
--- stable/9/sys/contrib/pf/net/pf_table.c	Wed Oct 26 17:04:26 2011	(r226800)
+++ stable/9/sys/contrib/pf/net/pf_table.c	Wed Oct 26 17:09:09 2011	(r226801)
@@ -906,7 +906,7 @@ pfr_lookup_addr(struct pfr_ktable *kt, s
 		pfr_prepare_network(&mask, ad->pfra_af, ad->pfra_net);
 		s = splsoftnet(); /* rn_lookup makes use of globals */
 #ifdef __FreeBSD__
-		PF_ASSERT(MA_OWNED);
+		PF_LOCK_ASSERT();
 #endif
 		ke = (struct pfr_kentry *)rn_lookup(&sa, &mask, head);
 		splx(s);
@@ -1127,7 +1127,7 @@ pfr_route_kentry(struct pfr_ktable *kt, 
 
 	s = splsoftnet();
 #ifdef __FreeBSD__
-	PF_ASSERT(MA_OWNED);
+	PF_LOCK_ASSERT();
 #endif
 	if (KENTRY_NETWORK(ke)) {
 		pfr_prepare_network(&mask, ke->pfrke_af, ke->pfrke_net);
@@ -1166,7 +1166,7 @@ pfr_unroute_kentry(struct pfr_ktable *kt
 
 	s = splsoftnet();
 #ifdef __FreeBSD__
-	PF_ASSERT(MA_OWNED);
+	PF_LOCK_ASSERT();
 #endif
 	if (KENTRY_NETWORK(ke)) {
 		pfr_prepare_network(&mask, ke->pfrke_af, ke->pfrke_net);

Modified: stable/9/sys/contrib/pf/net/pfvar.h
==============================================================================
--- stable/9/sys/contrib/pf/net/pfvar.h	Wed Oct 26 17:04:26 2011	(r226800)
+++ stable/9/sys/contrib/pf/net/pfvar.h	Wed Oct 26 17:09:09 2011	(r226801)
@@ -222,7 +222,7 @@ struct pfi_dynaddr {
 #define	PF_NAME		"pf"
 
 #define	PR_NOWAIT	M_NOWAIT
-#define	PR_WAITOK	M_WAIT
+#define	PR_WAITOK	M_WAITOK
 #define	PR_ZERO		M_ZERO
 #define	pool_get(p, f)	uma_zalloc(*(p), (f))
 #define	pool_put(p, o)	uma_zfree(*(p), (o))
@@ -237,33 +237,25 @@ struct pfi_dynaddr {
 		uma_zdestroy(var)
 
 #ifdef __FreeBSD__
-VNET_DECLARE(struct mtx,	 pf_task_mtx);
-#define	V_pf_task_mtx		 VNET(pf_task_mtx)
-
-#define	PF_ASSERT(h)	mtx_assert(&V_pf_task_mtx, (h))
-
-#define	PF_LOCK()	do {				\
-	PF_ASSERT(MA_NOTOWNED);				\
-	mtx_lock(&V_pf_task_mtx);			\
-} while(0)
-#define	PF_UNLOCK()	do {				\
-	PF_ASSERT(MA_OWNED);				\
-	mtx_unlock(&V_pf_task_mtx);			\
-} while(0)
-#else
 extern struct mtx pf_task_mtx;
 
-#define	PF_ASSERT(h)	mtx_assert(&pf_task_mtx, (h))
+#define	PF_LOCK_ASSERT()	mtx_assert(&pf_task_mtx, MA_OWNED)
+#define	PF_UNLOCK_ASSERT()	mtx_assert(&pf_task_mtx, MA_NOTOWNED)
 
 #define	PF_LOCK()	do {				\
-	PF_ASSERT(MA_NOTOWNED);				\
+	PF_UNLOCK_ASSERT();				\
 	mtx_lock(&pf_task_mtx);				\
 } while(0)
 #define	PF_UNLOCK()	do {				\
-	PF_ASSERT(MA_OWNED);				\
+	PF_LOCK_ASSERT();				\
 	mtx_unlock(&pf_task_mtx);			\
 } while(0)
-#endif
+#else
+#define	PF_LOCK_ASSERT()
+#define	PF_UNLOCK_ASSERT()
+#define	PF_LOCK()
+#define	PF_UNLOCK()
+#endif /* __FreeBSD__ */
 
 #define	PF_COPYIN(uaddr, kaddr, len, r)		do {	\
 	PF_UNLOCK();					\
@@ -277,9 +269,6 @@ extern struct mtx pf_task_mtx;
 	PF_LOCK();					\
 } while(0)
 
-extern void init_pf_mutex(void);
-extern void destroy_pf_mutex(void);
-
 #define	PF_MODVER	1
 #define	PFLOG_MODVER	1
 #define	PFSYNC_MODVER	1

