From: Milan Obuch <freebsd-net@dino.sk>
To: freebsd-net@freebsd.org
Date: Mon, 15 Aug 2022 08:53:03 +0200
Subject: Tunnel interfaces and vnet boundary crossing
Message-ID: <20220815085303.2c5cdb02@zeta.dino.sk>

Hi,

some time ago I designed and implemented a multi-tenant OpenVPN server
using vnet jails. This way I am able to run multiple OpenVPN instances on
a single public IP. It is made possible by the tun/tap interfaces' ability
to cross the vnet boundary - here is part of my initialisation command
sequence for one instance:

jail -c name=ov1 vnet persist
jexec ov1 hostname -s ov1
jexec ov1 ifconfig lo0 127.0.0.1/8
jexec ov1 sysctl net.inet.ip.forwarding=1
ifconfig tun1 create vnet ov1
/usr/local/sbin/openvpn --cd /usr/local/etc/openvpn --daemon ov1 --config ov1.cfg --writepid /var/run/ov1.pid

In ov1.cfg, the relevant bits are

port 1001
management localhost 2001
dev tun1

(The actual numbers are different; the important thing is how they relate
to each other.)

This way the OpenVPN process runs in the base vnet, using one side of the
pre-created tun/tap interface, while networking uses the other side of
this interface in the child vnet, isolated from the base vnet (and from
the other OpenVPN instances as well). Presently I am using vlan interfaces
on one ethernet interface to connect the individual instances to their
respective local networks.
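(Just to make that last part concrete, the vlan attachment looks roughly
like this - the NIC name igb0, the vlan tag 101 and the address are only
illustrative examples, not my real values:)

# create a vlan on the physical NIC in the base vnet...
ifconfig vlan101 create vlan 101 vlandev igb0
# ...move it into the instance's vnet and give it its local address there
ifconfig vlan101 vnet ov1
jexec ov1 ifconfig vlan101 192.168.101.1/24 up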
I'd like to replace this with some tunnel interface (gif, gre, ideally
ipsec secured). The best way to illustrate what I am after is a Cisco
config snippet:

interface Tunnel1
 vrf forwarding vrf1
 ip address 192.168.0.1 255.255.255.252
 tunnel source Loopback0
 tunnel destination 172.16.0.1

This means the outer layer uses the base routing table for the tunnel
itself, while the inner layer - the packets/datagrams transferred over the
tunnel - uses another vrf. I tried to mimic this in FreeBSD with the
following commands:

ifconfig gre1 create tunnel 172.16.1.1 172.16.0.1 vnet ov1
jexec ov1 ifconfig gre1 10.1.0.2/30 10.1.0.1

This does not work. I found an older post which led me to believe that the
whole tunnel configuration is cleared when the interface is moved into a
different vnet, and my (failed) tests indicate this is most probably the
cause.

So, my question is: does anybody use a tunnel interface in a similar way?
Is it possible to achieve what I am trying with netgraph? I am able to
create an inter-vnet link using an epair interface, but that is something
different. Or, ideally, is somebody using IPSEC with vnet jails,
processing the encapsulating packets in the base vnet and the raw content
in some child vnet?

Regards,
Milan
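P.S. For reference, the epair-based inter-vnet link I mentioned looks
roughly like this (interface names and addresses are purely illustrative).
The gre tunnel stays in the base vnet, so its outer endpoints use the base
routing table, and only the inner traffic is routed into the jail over the
epair - which is why I call it something different from moving the tunnel
itself:

# tunnel lives entirely in the base vnet; outer endpoints use the base routing table
ifconfig gre1 create tunnel 172.16.1.1 172.16.0.1
ifconfig gre1 10.1.0.2/30 10.1.0.1

# inter-vnet link: the 'b' side goes into the jail
ifconfig epair1 create
ifconfig epair1b vnet ov1
ifconfig epair1a 10.1.1.1/30 up
jexec ov1 ifconfig epair1b 10.1.1.2/30 up

# tunnel-bound traffic from the jail then has to be routed over the epair, e.g.
jexec ov1 route add default 10.1.1.1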