[Bug 286322] IPv6 doesn't work across different FIBs (epair)

From: <bugzilla-noreply@freebsd.org>
Date: Thu, 24 Apr 2025 14:18:56 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=286322

            Bug ID: 286322
           Summary: IPv6 doesn't work across different FIBs (epair)
           Product: Base System
           Version: 14.2-STABLE
          Hardware: amd64
                OS: Any
            Status: New
          Severity: Affects Many People
          Priority: ---
         Component: kern
          Assignee: bugs@FreeBSD.org
          Reporter: paige@paige.bio

FreeBSD stelleri.netcrave.network 14.2-RELEASE-p3 FreeBSD 14.2-RELEASE-p3
n269524-1eb03b059e56 STELLERI amd64

➜  stelleri ifconfig epair128 create
epair128a
➜  stelleri ifconfig epair128b inet6 fcff::b/64 fib 128
➜  stelleri ifconfig epair128a inet6 fcff::a/64
➜  stelleri ping -S fcff::a fcff::b
PING(56=40+8+8 bytes) fcff::a --> fcff::b
^C
--- fcff::b ping statistics ---
4 packets transmitted, 0 packets received, 100.0% packet loss
➜  stelleri 
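
For completeness, the repro above assumes the system already has enough
FIBs for fib 128 to exist; net.fibs is a boot-time tunable, so something
like the following (a setup sketch, not part of the session above) is
needed first:

# /boot/loader.conf (requires a reboot; 129 tables => fibs 0..128)
net.fibs="129"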


If you move epair128b to FIB 0, it works:

➜  stelleri ifconfig epair128b inet6 fcff::b/64 fib 0
➜  stelleri ping -S fcff::a fcff::b                  
PING(56=40+8+8 bytes) fcff::a --> fcff::b
16 bytes from fcff::b, icmp_seq=0 hlim=64 time=0.131 ms
16 bytes from fcff::b, icmp_seq=1 hlim=64 time=0.132 ms
^C
--- fcff::b ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.131/0.131/0.132/0.000 ms
➜  stelleri 
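
(A check I didn't capture in the session, but which might be informative
at this point: whether configuring the address in FIB 0 installed the
fcff::/64 prefix route there, e.g.

netstat -rn -f inet6 -F 0 | grep fcff

The -F flag selects which FIB's routing table netstat displays.)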


If you then move it back to FIB 128, it suddenly works:

➜  stelleri ifconfig epair128b inet6 fcff::b/64 fib 128
➜  stelleri ping -S fcff::a fcff::b                    
PING(56=40+8+8 bytes) fcff::a --> fcff::b
16 bytes from fcff::b, icmp_seq=0 hlim=64 time=0.133 ms
16 bytes from fcff::b, icmp_seq=1 hlim=64 time=0.131 ms
^C
--- fcff::b ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.131/0.132/0.133/0.001 ms
➜  stelleri ifconfig epair128b
epair128b: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
        options=8<VLAN_MTU>
        ether 02:4b:3b:66:b3:0b
        inet6 fe80::4b:3bff:fe66:b30b%epair128b prefixlen 64 scopeid 0x23
        inet6 fcff::b prefixlen 64
        groups: epair
        fib: 128
        media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
        status: active
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
➜  stelleri 
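
At this point it might be worth comparing what each FIB's IPv6 routing
table holds (a diagnostic sketch rather than output I captured), since
the state that changed across the moves is per-FIB route state, not
anything visible in ifconfig:

netstat -rn -f inet6 -F 128   # does FIB 128 now have fcff::/64 via epair128b?
netstat -rn -f inet6 -F 0     # default FIB, where epair128a's prefix route lives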


Because of this behavior I would think it must have something to do with
NDP; however, looking at the NDP state before and after each move, I
couldn't tell one way or the other from its state:

before: 

➜  stelleri ndp -a
Neighbor                             Linklayer Address  Netif Expire    S Flags
fe80::ac:22ff:fe52:c10a%epair128a    02:ac:22:52:c1:0a epair128a permanent R 
fcff::a                              02:ac:22:52:c1:0a epair128a permanent R 
fe80::ac:22ff:fe52:c10b%epair128b    02:ac:22:52:c1:0b epair128b permanent R 
fcff::b                              02:ac:22:52:c1:0b epair128b permanent R 

and after move to fib 0:
➜  stelleri ndp -a                                   
Neighbor                             Linklayer Address  Netif Expire    S Flags
fe80::ac:22ff:fe52:c10a%epair128a    02:ac:22:52:c1:0a epair128a permanent R 
fcff::a                              02:ac:22:52:c1:0a epair128a permanent R 
fe80::ac:22ff:fe52:c10b%epair128b    02:ac:22:52:c1:0b epair128b permanent R 
fcff::b                              02:ac:22:52:c1:0b epair128b permanent R 

and after move back: 
➜  stelleri ndp -a
Neighbor                             Linklayer Address  Netif Expire    S Flags
fe80::4b:3bff:fe66:b30a%epair128a    02:4b:3b:66:b3:0a epair128a permanent R 
fcff::a                              02:4b:3b:66:b3:0a epair128a permanent R 
fe80::4b:3bff:fe66:b30b%epair128b    02:4b:3b:66:b3:0b epair128b permanent R 
fcff::b                              02:4b:3b:66:b3:0b epair128b permanent R 
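
Since the NDP state looks equivalent in all three captures, a
differential test from the FIB 128 side might also help (hypothetical
commands, not run in the session above; setfib(1) runs a command with
the given FIB as its default):

setfib 128 ping -c 2 -S fcff::b fcff::a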

Sorry if this is filed in the wrong place; I'm not really sure where to
file it. It also might not be specific to amd64; that's just what I have
to go on (the only machine I have to test on is amd64).
