Re: Tunnel interfaces and vnet boundary crossing
Date: Tue, 23 Aug 2022 15:06:00 UTC
Hello. This is my first email to this mailing list, so I hope it has been sent and formatted correctly.

Regarding tun interfaces, I do something similar to M. Gmelin; however, I create the tunnel (tun, for openvpn) within the vnet jail. I think your question is not about tun interfaces but about gre, which I discuss near the end of this email.

Regarding devfs: my /etc/devfs.rules contains the following ruleset, applicable to all such jails:

[devfsrules_jail=5]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
add path bpf unhide
add path tun* unhide
add path bpf0 unhide

This ruleset ID is specified during jail creation. If you are modifying this on a running system, I think you will need to use the devfs utility. This is how I get tun interfaces into a jail, for use with openvpn or otherwise.

General background: all the vnet jails are connected to a bridge with epairs, and NAT is performed for jail traffic exchanged with the exterior. There is an additional vnet jail which handles ipsec; the ipsec jail connects the host's jail network (usually a /24) to other local networks. Each external local network's route is specified in the host's routing table, with the next hop being the ipsec jail. I personally don't use ipsec interfaces, so in the ipsec jail the traffic is extracted by the kernel (per the traffic selectors, TSs, configured with strongSwan), encapsulated, and sent out the host's external interface. I have no knowledge of whether this is better or worse than using an ipsec interface.

Specific steps inside an openvpn jail: after an openvpn jail is started, the tunnel (tun interface) is created in the jail by running the openvpn daemon. I think tunnel numbers (appearing in the interface name) need to be globally distinct. I'm not sure if you needed any assistance with this.

Regarding gre interfaces, I also create these within the jail, and I have never had any problems with this.
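To tie the devfs ruleset to jail creation, here is a minimal sketch of how it might be applied via /etc/jail.conf; the jail name ov1 and the path are assumptions for illustration, not taken from my actual configuration:

```
# /etc/jail.conf fragment (illustrative): a vnet jail using the
# devfs ruleset shown above so tun* and bpf appear in its /dev
ov1 {
    vnet;
    persist;
    path = "/jails/ov1";        # assumed jail root
    mount.devfs;                # mount devfs inside the jail
    devfs_ruleset = 5;          # matches [devfsrules_jail=5] in /etc/devfs.rules
    exec.start = "/bin/sh /etc/rc";
    exec.stop  = "/bin/sh /etc/rc.shutdown";
}
```

On an already-running jail, I believe something like `devfs -m /jails/ov1/dev ruleset 5` followed by `devfs -m /jails/ov1/dev rule applyset` applies the ruleset without a restart.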
I don't know that a specific device node is needed, so I don't know if devfs is involved here.

Regarding the specific use case: a gre interface is created in the vnet jail, and the corresponding other gre interface resides somewhere else on the WAN (in some other local network). Suppose we have a vnet jail j1 with epair address 192.168.1.10, on the host's 192.168.1.0/24 network. We want this jail to communicate over GRE with some other vnet jail (or some other unspecified host, such as a VM) which has address 192.168.2.10/24. We choose local GRE addresses of 10.0.0.1 and 10.0.0.2, respectively, in a /30:

jexec j1 ifconfig gre0 create
jexec j1 ifconfig gre0 tunnel 192.168.1.10 192.168.2.10
jexec j1 ifconfig gre0 inet 10.0.0.1/30 10.0.0.2
jexec j1 ifconfig gre0 up

j1 has a default route to the host's bridge, and the host has 192.168.2.0/24 set to route through the ipsec jail. The ipsec jail, upon receiving the traffic, encapsulates it per the TS, and it's sent to the appropriate host over ESP or ESP-in-UDP. NAT is performed by the host, since all jails including the ipsec jail have local addresses, so UDP encapsulation is typical. Naturally, the firewalls along the way will need to be configured to allow the gre traffic.

If traffic is being exchanged across an openvpn tun interface in j1 from local networks such as 10.0.0.2, you would also probably want to configure NAT there (within your jail), so that the other end of that tunnel only sees the tun interface address, not your local network.

The end result is, among other things, that hosts in 192.168.2.0/24 can set their default route to go through the openvpn tun (via their gre tunnel), which is what I understand your goal to be. Your openvpn jail should have its own default route set through that tun, with net.inet.ip.forwarding=1, etc.

Please let me know if anything is unclear or if you would like more information.
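For completeness, the host-side NAT and GRE pass rules mentioned above might look roughly like the following in pf; the external interface name em0 is an assumption, and this is a sketch rather than my actual configuration:

```
# /etc/pf.conf fragment on the host (illustrative)
ext_if   = "em0"             # assumed external interface name
jail_net = "192.168.1.0/24"  # the host's jail network

# NAT all jail traffic (including the ipsec jail's ESP-in-UDP) to the outside
nat on $ext_if inet from $jail_net to any -> ($ext_if)

# allow GRE (IP protocol 47) between the two tunnel endpoints
pass proto gre from 192.168.1.10 to 192.168.2.10
pass proto gre from 192.168.2.10 to 192.168.1.10
```

Equivalent rules would of course be needed in whichever firewall each hop actually runs.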
On 2022-08-17 17:16, Milan Obuch wrote:
> On Wed, 17 Aug 2022 22:22:45 +0200
> Michael Gmelin <grembo@freebsd.org> wrote:
>
>> > On 15. Aug 2022, at 08:52, Milan Obuch <freebsd-net@dino.sk> wrote:
>> >
>> > Hi,
>> >
>> > some time ago I managed to design and implement a multi-tenant OpenVPN
>> > server using vnet jails. This way I am able to use more OpenVPN
>> > instances on a single public IP.
>> >
>> > This is made possible by a tun/tap interface property allowing it to
>> > cross the vnet boundary - here is part of my initialisation command
>> > sequence for one instance:
>> >
>> > jail -c name=ov1 vnet persist
>> > jexec ov1 hostname -s ov1
>> > jexec ov1 ifconfig lo0 127.0.0.1/8
>> > jexec ov1 sysctl net.inet.ip.forwarding=1
>> > ifconfig tun1 create vnet ov1
>> > /usr/local/sbin/openvpn --cd /usr/local/etc/openvpn --daemon ov1 \
>> >     --config ov1.cfg --writepid /var/run/ov1.pid
>> >
>> > In ov1.cfg, the relevant bits are
>> >
>> > port 1001
>> > management localhost 2001
>> > dev tun1
>> >
>> > (Actual numbers are different, but the important thing is how they
>> > relate to each other.)
>> >
>> > This way, the OpenVPN process runs in the base vnet, using one side of
>> > the pre-created tun/tap interface, while networking uses the other side
>> > of this interface in the child vnet, isolated from the base vnet (and
>> > from other OpenVPN instances as well).
>> >
>> > Presently, I am using vlan interfaces on one ethernet interface to
>> > connect individual instances to their respective local networks. I'd
>> > like to replace this with some tunnel interface (gif, gre, ideally
>> > ipsec secured). The best way to illustrate is with a Cisco config
>> > snippet:
>> >
>> > interface Tunnel1
>> >  vrf forwarding vrf1
>> >  ip address 192.168.0.1 255.255.255.252
>> >  tunnel source Loopback0
>> >  tunnel destination 172.16.0.1
>> >
>> > This means the outer layer uses the base route table for tunnel
>> > creation, while the inner layer, packets/datagrams transferred over
>> > the tunnel, uses another vrf.
>> >
>> > I tried to mimic this in FreeBSD with the following commands:
>> >
>> > ifconfig gre1 create tunnel 172.16.1.1 172.16.0.1 vnet ov1
>> > jexec ov1 ifconfig gre1 10.1.0.2/30 10.1.0.1
>> >
>> > This does not work. I found an older post which made me believe
>> > this is caused by the whole tunnel configuration being cleared after
>> > moving the interface into a different vnet. My (failed) tests indicate
>> > this is most probably the cause.
>> >
>> > So, my question is, does anybody use a tunnel interface in a similar
>> > way? Is it possible to achieve what I am trying with netgraph? I am
>> > able to create an inter-vnet link using an epair interface, but this
>> > is something different. Or ideally, is somebody using IPSEC with vnet
>> > jails, processing the encapsulating packets in the base and the raw
>> > content in some child vnet?
>> >
>>
>> Not sure if that helps you at all, but what I've done in the past is
>> create a tunnel interface on the jailhost and add a devfs rule to
>> allow access to it from within the vnet jail. I then run OpenVPN
>> within that jail (so OpenVPN and the tunnel interface are in the same
>> jail).
>>
>
> What would that devfs rule look like? Did you try that with multiple
> OpenVPN processes? Where are the routing rules for networks to be
> accessed via tunnels created?
>
>> It's super stable; the only issue is that you need to be careful about
>> when to release/destroy the interface on jail restart, otherwise it
>> will become unavailable on the jailhost and in a (new) jail.
>
> I have no problem with stability, I would just like to add the ability
> to use a gif/gre/ipsec tunnel to my solution (I can connect to some
> remote LAN via a dedicated VLAN configured on ethernet, but this is of
> no use when some network not under my control is to be crossed).
>
> Regards,
> Milan