tinc and IPv6 routing, or: how to set up a local IPv6

Niklaas Baudet von Gersdorff stdin at niklaas.eu
Thu May 19 12:44:50 UTC 2016


Hello,

in case this is something obvious, please bear with me. I am not
a professional, it's just my hobby to play around with computers.

I am trying to set up a tinc VPN that connects two servers. In fact, the
VPN is working for IPv4, but I cannot get it to work for IPv6. Because of
this, I assume it's a routing problem with IPv6 rather than a problem with
tinc. To be honest, I don't have any experience setting up a local IPv6
network, so I guess I'm doing something wrong here.

The main aim is to connect several jails that are running on each of the
machines. They are in IPv4 networks 10.1.0.0/16 (machine A) and 10.2.0.0/16
(machine B), and in IPv6 networks fd16:dcc0:f4cc:0:0:1::/96 (machine A) and
fd16:dcc0:f4cc:0:0:2::/96 (machine B) respectively. Both on lo1.
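
Just to rule out a numbering mistake on my side: the two /96 jail networks
do nest inside the /80 that I later put on tap0. A quick check with
Python's ipaddress module (purely illustrative; subnet_of needs Python 3.7+):

```python
import ipaddress

# The /80 is what I assign to tap0 (see tinc-up below); each site's jail
# network is a /96 inside it.
vpn = ipaddress.ip_network("fd16:dcc0:f4cc::/80")
jails_a = ipaddress.ip_network("fd16:dcc0:f4cc:0:0:1::/96")
jails_b = ipaddress.ip_network("fd16:dcc0:f4cc:0:0:2::/96")

for net in (jails_a, jails_b):
    # Both lines end in True.
    print(net, "subnet of", vpn, "->", net.subnet_of(vpn))
```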

With the current configuration (see below) I end up with the following
routing tables:

	A $ netstat -rn | grep -e 'fd16' -e '10\.'
	10.0.0.0/8         link#4             U          tap0
	10.1.0.1           link#4             UHS         lo0
	10.1.1.1           link#3             UH          lo1
	10.2.0.0/16        10.1.0.1           UGS        tap0
	10.2.0.1           10.1.0.1           UGHS       tap0
	fd16:dcc0:f4cc::/80               link#4                        U          tap0
	fd16:dcc0:f4cc::1:0:0/96          link#3                        U           lo1
	fd16:dcc0:f4cc::1:0:1             link#4                        UHS         lo0
	fd16:dcc0:f4cc::1:1:1             link#3                        UHS         lo0
	fd16:dcc0:f4cc::2:0:0/96          fd16:dcc0:f4cc::1:0:1         UGS         lo1
	fd16:dcc0:f4cc::2:0:1             fd16:dcc0:f4cc::1:0:1         UGHS        lo1
	ff01::%lo1/32                     fd16:dcc0:f4cc::1:1:1         U           lo1
	ff01::%tap0/32                    fd16:dcc0:f4cc::1:0:1         U          tap0
	ff02::%lo1/32                     fd16:dcc0:f4cc::1:1:1         U           lo1
	ff02::%tap0/32                    fd16:dcc0:f4cc::1:0:1         U          tap0


	B $ netstat -rn | grep -e 'fd16' -e '10\.'
	10.0.0.0/8         link#4             U          tap0
	10.1.0.0/16        10.2.0.1           UGS        tap0
	10.1.0.1           10.2.0.1           UGHS       tap0
	10.2.0.1           link#4             UHS         lo0
	10.2.1.1           link#3             UH          lo1
	fd16:dcc0:f4cc::/80               link#4                        U          tap0
	fd16:dcc0:f4cc::1:0:0/96          fd16:dcc0:f4cc::2:0:1         UGS         lo1
	fd16:dcc0:f4cc::1:0:1             fd16:dcc0:f4cc::2:0:1         UGHS        lo1
	fd16:dcc0:f4cc::2:0:0/96          link#3                        U           lo1
	fd16:dcc0:f4cc::2:0:1             link#4                        UHS         lo0
	fd16:dcc0:f4cc::2:1:1             link#3                        UHS         lo0
	ff01::%lo1/32                     fd16:dcc0:f4cc::2:1:1         U           lo1
	ff01::%tap0/32                    fd16:dcc0:f4cc::2:0:1         U          tap0
	ff02::%lo1/32                     fd16:dcc0:f4cc::2:1:1         U           lo1
	ff02::%tap0/32                    fd16:dcc0:f4cc::2:0:1         U          tap0

Note: 10.{1,2}.1.1 are two jails running on machines A and B respectively.
These jails also have the IPv6 addresses fd16:dcc0:f4cc::{1,2}:1:1
assigned. 10.{1,2}.0.1 and fd16:dcc0:f4cc::{1,2}:0:1 are assigned manually
because tinc's documentation asks you to do so; see the configuration
below.

So, on both machines I can `ping 10.{1,2}.{0,1}.1` and get a response.
Obviously, depending on where I ping from, the responses take a few ms
longer when they come from the other machine. But if I `ping6
fd16:dcc0:f4cc::{1,2}:{0,1}:1` I only get a response from the machine the
ping6 originates from; that is, routing over the VPN does not seem to work
for IPv6. For example, see the following outputs:

	A $ ping -c 10 10.1.0.1
	PING 10.1.0.1 (10.1.0.1): 56 data bytes
	64 bytes from 10.1.0.1: icmp_seq=0 ttl=64 time=0.038 ms
	64 bytes from 10.1.0.1: icmp_seq=1 ttl=64 time=0.038 ms
	64 bytes from 10.1.0.1: icmp_seq=2 ttl=64 time=0.046 ms
	64 bytes from 10.1.0.1: icmp_seq=3 ttl=64 time=0.089 ms
	64 bytes from 10.1.0.1: icmp_seq=4 ttl=64 time=0.075 ms
	64 bytes from 10.1.0.1: icmp_seq=5 ttl=64 time=0.057 ms
	64 bytes from 10.1.0.1: icmp_seq=6 ttl=64 time=0.046 ms
	64 bytes from 10.1.0.1: icmp_seq=7 ttl=64 time=0.051 ms
	64 bytes from 10.1.0.1: icmp_seq=8 ttl=64 time=0.045 ms
	64 bytes from 10.1.0.1: icmp_seq=9 ttl=64 time=0.050 ms

	--- 10.1.0.1 ping statistics ---
	10 packets transmitted, 10 packets received, 0.0% packet loss
	round-trip min/avg/max/stddev = 0.038/0.053/0.089/0.016 ms


	A $ ping -c 10 10.2.0.1
	PING 10.2.0.1 (10.2.0.1): 56 data bytes
	64 bytes from 10.2.0.1: icmp_seq=0 ttl=64 time=8.200 ms
	64 bytes from 10.2.0.1: icmp_seq=1 ttl=64 time=7.846 ms
	64 bytes from 10.2.0.1: icmp_seq=2 ttl=64 time=7.881 ms
	64 bytes from 10.2.0.1: icmp_seq=3 ttl=64 time=7.652 ms
	64 bytes from 10.2.0.1: icmp_seq=4 ttl=64 time=7.874 ms
	64 bytes from 10.2.0.1: icmp_seq=5 ttl=64 time=7.876 ms
	64 bytes from 10.2.0.1: icmp_seq=6 ttl=64 time=7.694 ms
	64 bytes from 10.2.0.1: icmp_seq=7 ttl=64 time=7.893 ms
	64 bytes from 10.2.0.1: icmp_seq=8 ttl=64 time=8.519 ms
	64 bytes from 10.2.0.1: icmp_seq=9 ttl=64 time=8.129 ms

	--- 10.2.0.1 ping statistics ---
	10 packets transmitted, 10 packets received, 0.0% packet loss
	round-trip min/avg/max/stddev = 7.652/7.956/8.519/0.245 ms


	A $ ping6 -c 10 fd16:dcc0:f4cc::1:0:1
	PING6(56=40+8+8 bytes) fd16:dcc0:f4cc::1:0:1 --> fd16:dcc0:f4cc::1:0:1
	16 bytes from fd16:dcc0:f4cc::1:0:1, icmp_seq=0 hlim=64 time=0.099 ms
	16 bytes from fd16:dcc0:f4cc::1:0:1, icmp_seq=1 hlim=64 time=0.069 ms
	16 bytes from fd16:dcc0:f4cc::1:0:1, icmp_seq=2 hlim=64 time=0.135 ms
	16 bytes from fd16:dcc0:f4cc::1:0:1, icmp_seq=3 hlim=64 time=0.070 ms
	16 bytes from fd16:dcc0:f4cc::1:0:1, icmp_seq=4 hlim=64 time=0.108 ms
	16 bytes from fd16:dcc0:f4cc::1:0:1, icmp_seq=5 hlim=64 time=0.079 ms
	16 bytes from fd16:dcc0:f4cc::1:0:1, icmp_seq=6 hlim=64 time=0.102 ms
	16 bytes from fd16:dcc0:f4cc::1:0:1, icmp_seq=7 hlim=64 time=0.097 ms
	16 bytes from fd16:dcc0:f4cc::1:0:1, icmp_seq=8 hlim=64 time=0.099 ms
	16 bytes from fd16:dcc0:f4cc::1:0:1, icmp_seq=9 hlim=64 time=0.092 ms

	--- fd16:dcc0:f4cc::1:0:1 ping6 statistics ---
	10 packets transmitted, 10 packets received, 0.0% packet loss
	round-trip min/avg/max/std-dev = 0.069/0.095/0.135/0.018 ms


	A $ ping6 -c 10 fd16:dcc0:f4cc::2:0:1
	PING6(56=40+8+8 bytes) fd16:dcc0:f4cc::1:1:1 --> fd16:dcc0:f4cc::2:0:1

	--- fd16:dcc0:f4cc::2:0:1 ping6 statistics ---
	10 packets transmitted, 0 packets received, 100.0% packet loss

The outputs look pretty much the same on machine B -- just the other way
around. What is wrong with the IPv6 routing?

I have

    ipv6_gateway_enable="YES"

in /etc/rc.conf. See also:

	A $ sysctl net.inet6.ip6.forwarding
	net.inet6.ip6.forwarding: 1


	B $ sysctl net.inet6.ip6.forwarding
	net.inet6.ip6.forwarding: 1

I don't think it's a firewall problem because I have

    set skip on { lo0 tap0 }

in /etc/pf.conf, and the IPv4 VPN is working.

This is what the interfaces look like:

	A $ ifconfig tap0
	tap0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=80000<LINKSTATE>
		ether 00:bd:6b:e5:19:00
		inet6 fd16:dcc0:f4cc::1:0:1 prefixlen 80
		inet6 fe80::2bd:6bff:fee5:1900%tap0 prefixlen 64 scopeid 0x4
		inet 10.1.0.1 netmask 0xff000000 broadcast 10.255.255.255
		nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
		media: Ethernet autoselect
		status: active
		Opened by PID 6110


	B $ ifconfig tap0
	tap0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=80000<LINKSTATE>
		ether 00:bd:60:ca:17:00
		inet6 fd16:dcc0:f4cc::2:0:1 prefixlen 80 
		inet6 fe80::2bd:60ff:feca:1700%tap0 prefixlen 64 scopeid 0x4 
		inet 10.2.0.1 netmask 0xff000000 broadcast 10.255.255.255 
		nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
		media: Ethernet autoselect
		status: active
		Opened by PID 16037

The following is the tinc-up script on each machine that assigns IP
addresses and creates routes. I commented out some variations that
I tried but haven't had success with either:

	A $ cat /usr/local/etc/tinc/klaas/tinc-up
	ifconfig $INTERFACE inet6 fd16:dcc0:f4cc:0:0:1:0:1 prefixlen 80
	route -6 add -host fd16:dcc0:f4cc:0:0:2:0:1 fd16:dcc0:f4cc:0:0:1:0:1
	route -6 add -net  fd16:dcc0:f4cc:0:0:2::/96  fd16:dcc0:f4cc:0:0:1:0:1
	#route -6 add -ifp $INTERFACE -host fd16:dcc0:f4cc::2:0:1    fd16:dcc0:f4cc::1:0:1
	#route -6 add -ifp $INTERFACE -net  fd16:dcc0:f4cc::2:0:0/96 fd16:dcc0:f4cc::1:0:1

	ifconfig $INTERFACE 10.1.0.1 netmask 255.0.0.0
	route -4 add -host 10.2.0.1    10.1.0.1
	route -4 add -net  10.2.0.0/16 10.1.0.1

Again, this looks pretty much the same on machine B. $INTERFACE is
expanded to the interface that is set in tinc.conf (as you can see below),
that is, tap0.
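
One more variation I considered but have not verified (purely a sketch,
shown for discussion): pointing the IPv6 route at the tap device itself
with route(8)'s `-interface` modifier, instead of using my own address as
gateway, so the kernel hands the packets to the tunnel and lets tinc work
out the destination:

```shell
# Hypothetical tinc-up variant for machine A; untested on my side.
ifconfig $INTERFACE inet6 fd16:dcc0:f4cc:0:0:1:0:1 prefixlen 80
# Bind the route for B's jail network to the tap interface directly.
route add -inet6 fd16:dcc0:f4cc:0:0:2::/96 -interface $INTERFACE

ifconfig $INTERFACE 10.1.0.1 netmask 255.0.0.0
route add -net 10.2.0.0/16 10.1.0.1
```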

I tried the variants that explicitly set `-ifp $INTERFACE` because
I realised that

                                                                                vvv
    fd16:dcc0:f4cc::1:0:0/96          link#3                        U           lo1

although

                                                     vvvv
    10.2.0.0/16        10.1.0.1           UGS        tap0

Explicitly setting the interface changes the first route to tap0, but
I still cannot ping the other machine over the VPN. Whether the route for
the IPv6 network ends up on lo1 or tap0 also depends on whether I start
the jails or the tinc daemon first. I don't know whether that is
important.
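
My (possibly naive) understanding of why that interface matters: for
a packet to fd16:dcc0:f4cc::2:0:1, the /96 route wins over the /80 on tap0
by longest-prefix match, so if the /96 points at lo1, the packet never
enters the tunnel. A toy illustration of the selection rule (the lookup
function is hypothetical; real kernels use a radix trie):

```python
import ipaddress

# Simplified excerpt of machine A's routing table: (prefix, interface).
routes = [
    (ipaddress.ip_network("fd16:dcc0:f4cc::/80"), "tap0"),
    (ipaddress.ip_network("fd16:dcc0:f4cc::2:0:0/96"), "lo1"),
]

def lookup(dst):
    dst = ipaddress.ip_address(dst)
    matches = [(net, ifp) for net, ifp in routes if dst in net]
    # Longest (most specific) prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("fd16:dcc0:f4cc::2:0:1"))  # prints "lo1": the /96 beats the /80
```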

This is tinc.conf on machine A:

    Name = A
    ConnectTo = B
    BindToAddress = <public-ipv4>
    BindToAddress = <public-ipv6>
    Device = /dev/tap0

It looks pretty much the same for machine B. Since the tinc daemons can
connect, I assume everything is set up correctly here.

This is the host configuration file for A:

	Address = A.domain.tld
	Subnet = fd16:dcc0:f4cc:0:0:1::/96
	Subnet = 10.1.0.0/16

	-----BEGIN RSA PUBLIC KEY-----
	<secret>
	-----END RSA PUBLIC KEY-----

Again, the configuration file for machine B looks pretty much the same,
except that the subnets are the ones mentioned above.

Last but not least, I am not sure whether I need to have rtadvd running,
and if I do, on which interface: lo1 or tap0? I tried to set it up, but
I get errors, and I still couldn't ping the other side of the VPN:

	A $ cat /etc/rtadvd.conf
	tap0:\
		:addrs#1:addr="fd16:dcc0:f4cc:0:0::":prefixlen#80:tc=ether:

	A $ cat /etc/rc.conf
	rtadvd_enable="YES"
	rtadvd_interfaces="tap0"

	A $ grep rtadvd /var/log/messages
	May 19 10:36:18 A rtadvd[76279]: <getconfig> inet_pton failed for fd16:dcc0:f4cc:0:0:1:
	May 19 10:36:18 A rtadvd[76279]: <getconfig> inet_pton failed for fd16:dcc0:f4cc:0:0:1:
	May 19 10:36:34 A rtadvd[76279]: non-zero lifetime RA on RA receiving interface tap0.  Ignored.
	May 19 10:41:24 A rtadvd[77128]: <getconfig> inet_pton failed for fd16:dcc0:f4cc:0:0:1:
	May 19 10:41:40 A rtadvd[77128]: non-zero lifetime RA on RA receiving interface tap0.  Ignored.
	May 19 10:43:12 A rtadvd[77441]: <getconfig> inet_pton failed for fd16:dcc0:f4cc:0:0:1:
	May 19 10:43:28 A rtadvd[77441]: non-zero lifetime RA on RA receiving interface tap0.  Ignored.
	May 19 10:52:50 A rtadvd[77441]: non-zero lifetime RA on RA receiving interface tap0.  Ignored.
	May 19 12:19:19 A rtadvd[95216]: <getconfig> inet_pton failed for fd16:dcc0:f4cc:0:0:1:
	May 19 12:19:35 A rtadvd[95216]: non-zero lifetime RA on RA receiving interface tap0.  Ignored.
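
For what it's worth, the string in the log is indeed not a parseable IPv6
address (it ends in a single colon), which I checked with inet_pton
directly; I assume the log lines stem from an earlier version of my
rtadvd.conf:

```python
import socket

def parses(addr):
    """Return True if addr is a valid textual IPv6 address."""
    try:
        socket.inet_pton(socket.AF_INET6, addr)
        return True
    except OSError:
        return False

print(parses("fd16:dcc0:f4cc:0:0:1:"))  # False: trailing single ':' is invalid
print(parses("fd16:dcc0:f4cc:0:0::"))   # True: '::' completes the address
```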

As I said, I guess my IPv6 routing is broken, because IPv4 works fine. But
I don't know where to look for mistakes, and my knowledge is rather
superficial.

Any help is very much appreciated!

    Niklaas

