[Bug 202351] [ip6] [panic] Kernel panic in ip6_forward (different from 128247, 131038)

bugzilla-noreply at freebsd.org
Mon Aug 24 13:20:47 UTC 2015


https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=202351

markus.gebert at hostpoint.ch changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |markus.gebert at hostpoint.ch

--- Comment #2 from markus.gebert at hostpoint.ch ---
I've been seeing the same panic since upgrading from 10.1 to 10.2. The problem
seems to be that the pf IPv6 reassembly/refragmentation code cannot properly
cope with multicast packets, while if_bridge needs to broadcast exactly that
sort of traffic to its members and still pass it through the firewall.

Here's what I think happens in detail:

1. One of the bridge members receives an IPv6 multicast packet, which pf
reassembles.
2. if_bridge broadcasts it to the other member(s).
3. if_bridge applies outbound filtering, which refragments the packet
(forwarding the reassembled packet whole could cause MTU issues on IPv6, so
refragmentation is required).
4. Because this may produce multiple packets, pf injects all of them into the
IPv6 stack using ip6_forward() instead of passing a single packet back to
if_bridge for further processing.
5. ip6_forward() refuses to handle multicast packets, because it was written
for unicast traffic.
6. Because somewhere along the way (my guess: in the pf
reassembly/refragmentation code) m->m_pkthdr.rcvif was lost, we panic when
ip6_forward() tries to log that it will not forward this multicast packet (see
the excerpt below).
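
For reference, this is roughly the check we trip in sys/netinet6/ip6_forward.c
(paraphrased from my reading of the 10.2 sources, so details may differ
slightly). The log() call dereferences m->m_pkthdr.rcvif without a NULL check:

    /* ip6_forward(): refuse non-unicast traffic (paraphrased) */
    if ((m->m_flags & (M_BCAST|M_MCAST)) != 0 ||
        IN6_IS_ADDR_MULTICAST(&ip6->ip6_dst) ||
        IN6_IS_ADDR_UNSPECIFIED(&ip6->ip6_src)) {
            IP6STAT_INC(ip6s_cantforward);
            if (V_ip6_log_time + V_ip6_log_interval < time_uptime) {
                    V_ip6_log_time = time_uptime;
                    log(LOG_DEBUG,
                        "cannot forward "
                        "from %s to %s nxt %d received on %s\n",
                        ip6_sprintf(ip6bufs, &ip6->ip6_src),
                        ip6_sprintf(ip6bufd, &ip6->ip6_dst),
                        ip6->ip6_nxt,
                        /* if_name() is a plain dereference; a NULL
                         * rcvif panics right here */
                        if_name(m->m_pkthdr.rcvif));
            }
            m_freem(m);
            return;
    }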

Even if we fix the log() call and/or make sure that rcvif is always set, which
would make the panic go away, IPv6 multicast still will not work together with
pf scrubbing and if_bridge.
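
A minimal sketch of such a band-aid, assuming we only guard the dereference
(untested):

    /* sketch: don't dereference a NULL rcvif when logging */
    log(LOG_DEBUG,
        "cannot forward from %s to %s nxt %d received on %s\n",
        ip6_sprintf(ip6bufs, &ip6->ip6_src),
        ip6_sprintf(ip6bufd, &ip6->ip6_dst),
        ip6->ip6_nxt,
        m->m_pkthdr.rcvif != NULL ?
            if_name(m->m_pkthdr.rcvif) : "(unknown)");

This only hides the symptom, of course; ip6_forward() would still drop the
multicast packets afterwards.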

My current workaround is to disable scrubbing on bridge members, because I
don't really need it there. Another approach might be to disable it (or at
least reassembly) just for IPv6 multicast traffic.
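
In pf.conf terms the workaround looks something like this (the interface names
are placeholders for my setup):

    # scrub only on the outside interface; no scrub rule for the
    # bridge members, so their traffic is never reassembled
    scrub in on em0 all fragment reassemble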

Short-term solution: I think pf should be fixed not to reassemble IPv6
multicast traffic, as long as it's unable to properly reinject that kind of
traffic after refragmentation.
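
A sketch of that idea (untested, and the exact placement would need checking):
bail out early in pf_normalize_ip6() in sys/netpfil/pf/pf_norm.c when the
destination is multicast:

    /* sketch: skip IPv6 reassembly for multicast destinations until
     * refragmented multicast can be reinjected properly (untested;
     * m is the mbuf being normalized) */
    struct ip6_hdr *h = mtod(m, struct ip6_hdr *);

    if (IN6_IS_ADDR_MULTICAST(&h->ip6_dst))
            return (PF_PASS);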

While calling ip6_forward() in the pf refragmentation code seems ok for normal
(forwarded) traffic, in the bridge case it does not make sense to me even for
unicast traffic. IMO bridged traffic should never be passed into the IPv6
stack, where it could be routed and even end up on interfaces which are not
part of the bridge. Not knowing the code very well, I'm not sure whether this
scenario is really possible, but I think that using ip6_forward() for bridged
traffic is asking for trouble.
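
For context, the tail of pf_refragment6() in sys/netpfil/pf/pf_norm.c
(again paraphrased from my reading of the 10.2 sources) hands every fragment
to ip6_forward(), whether the packet is being routed or bridged:

    /* pf_refragment6(), paraphrased: after ip6_fragment() has split
     * the reassembled packet, every fragment goes into the routing
     * path, even when the original packet came from if_bridge */
    for (t = m; m; m = t) {
            t = m->m_nextpkt;
            m->m_nextpkt = NULL;
            /* note: later firewall passes are skipped, too */
            m->m_flags |= M_SKIP_FIREWALL;
            if (error == 0)
                    ip6_forward(m, 0);
            else
                    m_freem(m);
    }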

So in my opinion, as long as pf refragmentation for IPv6 works the way it does
right now, pf should not reassemble packets that might somehow end up on a
bridged interface. But since reassembly happens on the inbound interface, it
seems hard to know whether a packet will ever end up on a bridged interface,
except of course in the simplest case, where the inbound interface itself is
part of a bridge. Ultimately it might be easier to just inform users that, for
IPv6, pf scrubbing is ok for forwarded traffic but might cause trouble (and
lead to unintended routing?) on bridges.

Of course the best solution seems to be for the firewall code to be able to
pass a list of packets back to the caller and let it decide whether to call
into the IP stack (forwarding case) or into the interface transmit code
(bridging case), thereby eliminating the need to use ip6_forward() directly in
the first place. But the current use of ip6_forward() suggests that this is
not possible today and probably not easily changed. Again, I have spent only a
limited amount of time figuring out how all these parts interact, but I also
suspect that calling ip6_forward() could prevent a subsequent firewall from
denying these packets when multiple firewalls are in use, couldn't it? (The
M_SKIP_FIREWALL flag set in the refragmentation loop above seems to point that
way.)
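
To illustrate what I mean (purely hypothetical, no such interface exists
today): the filter hook could return all resulting packets as a chain linked
through m_nextpkt and leave transmission to the caller. pf_test6_list() below
is a made-up name, and sc, dst_ifp and bridged are placeholders for whatever
context the caller actually has:

    /* hypothetical caller-side sketch: the firewall returns the
     * refragmented packets and the caller picks the output path */
    struct mbuf *frags, *m, *t;

    if (pf_test6_list(PF_OUT, ifp, &frags /* , ... */) == PF_PASS) {
            for (m = frags; m != NULL; m = t) {
                    t = m->m_nextpkt;
                    m->m_nextpkt = NULL;
                    if (bridged)
                            bridge_enqueue(sc, dst_ifp, m); /* bridge */
                    else
                            ip6_forward(m, 0);              /* routing */
            }
    }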

-- 
You are receiving this mail because:
You are the assignee for the bug.
