vnet without epair

Teske, Devin Devin.Teske at
Sat Feb 9 23:12:21 UTC 2013

On Sat, 9 Feb 2013, Fbsd8 wrote:

> Nikos Vassiliadis wrote:
> > On 2/9/2013 5:57 PM, Fbsd8 wrote:
> >> Has any one been able to get RELEASE 9.1 to enable jail vnet without
> >> having to use epair?
> >
> > Yes, you can use vnet-enabled jails with several types of interfaces.
> > Physical ones like em0 etc, virtual ones like vlan0 etc, netgraph
> > ethernet-like interfaces like ngeth etc and if_epair interfaces.
> > What all these have in common is that they all are ethernet-like.
> >
> > You don't mention what kind of use and more or less most interfaces
> > are usable in a vnet jail. Could you share more on what you are
> > trying to achieve?
> >
> > Nikos
> >
> >
> Thanks for your reply and interest.
> What I am doing is writing documentation that describes the new 9.1 jail
> extensions for jail.conf and the rc.conf jail statements. I am going to
> submit changes to /etc/defaults/rc.conf and as long as I was on the jail
> subject thought I may as well include vnet because it was missing from
> /etc/defaults/rc.conf.

Thanks for taking this on.

> I did google search and could only find 9.0 vnet jails using epair.

I'm surprised you didn't find my own page on vnet jails using netgraph:

What I did was duplicate the old rc.d/jail script one day and modify it to support vnet jails (read: it doesn't use jail.conf; it uses the "old" style of rc.conf(5) parameters), with the built-in ability to do bridging with netgraph (if you enable the right kernel options and/or have the right modules loaded). It also supports shoving whole interfaces into the vnet jails (be they real or pseudo interfaces; the only restriction is that the interface has to be a valid argument to "ifconfig <interface> vnet <jail_id>").
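A minimal sketch of that last point, done by hand rather than via the script (the jail name "testjail", the interface em1, and the address are placeholders, not values from my setup):

```shell
# Create a persistent jail with its own virtual network stack,
# then move a whole interface into it. Requires VIMAGE in the kernel.
jail -c name=testjail vnet persist           # new vnet jail, kept alive
jid=$(jls -j testjail jid)                   # look up its numeric jail ID
ifconfig em1 vnet "$jid"                     # hand em1 to the jail's stack
jexec testjail ifconfig em1 inet 192.0.2.10/24 up   # configure it inside
```

Once moved, em1 disappears from the host's interface list until the jail is destroyed or the interface is reclaimed with "ifconfig em1 -vnet".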

ASIDE: The nice thing about using netgraph to do the bridging on the back-end is that "ngctl dot | dot -Tsvg -o netgraph.svg" creates nice pictures of your network layout (aside from being very versatile).
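Spelled out as separate steps (assuming the graphics/graphviz port is installed to provide dot(1)):

```shell
# Dump the current netgraph topology in Graphviz format, then render
# it as an SVG picture of the network layout.
ngctl dot > netgraph.dot
dot -Tsvg -o netgraph.svg netgraph.dot
```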

> It was my understanding that epair was not necessary
> to use vnet and thanks to you, you confirmed it.
> As part of this self-appointed project I plan to also update "man jail"
> and the handbook jail section which is really way out of date. I plan to
> include vnet in all aspects of this project. I must point out this is
> not just a writing project. I have been using rc.conf jail statements to
> configure jails for some time now,

I hope you'll look at my vimage package (we've been using it for a little over 12 months now). $work has been very happy with it to say the least.

> and have a test bed to test things I
> write about so I can verify what I write is true and valid. I am working
> with the author of the jail environment and already have discovered bugs
> which are being addressed. I have never played with vimage as it's
> labeled as experimental because it is not scp aware.

I think you mean it conflicts with SCTP (network protocol like UDP and TCP).

> IE: can not use more than a single cpu.

I'm not so sure about that.

> One of the 9.1 jail extensions deals with being able to use quotas
> inside of jails. I am excited to begin testing this new function.

Very cool -- looking forward to reading updates on that.

> During my jail research I have come across posts where people have to
> use a kernel patch to get xorg desktops to work inside of a jail. I have
> a separate post to questions list trying to mine some info on that subject.


> I am always open to input. If you have the background to support my
> efforts in this project its welcomed.

Yeah, we use vimages a lot at $work. For example, just yesterday I needed to move a machine into the server room, but it wasn't in a rack-mountable case. So I rsync'd the OS (minus /dev and /proc, of course) to a directory on the vimage server, spent a minute or two copy/pasting in /etc/rc.conf and changing a couple of values (like which em* interface to bridge to), and then ran "service vimage start [thename]", obsoleting the once-physical machine in favor of a new vimage.

In this case, the server needed to run samba on a private network. Worked great. Freed up some workstation hardware for an actual workstation and a server that should have been in the rack is now running on server equipment as it should. It was a win for everybody and it took less than an hour (including the time to rsync).
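A rough sketch of that migration; the hostname oldhost, the target directory /vimages/oldhost, and em0 are placeholder names, not the actual values I used:

```shell
# Copy the running OS into the vimage root, preserving hard links and
# skipping the pseudo-filesystems.
rsync -aH --exclude /dev --exclude /proc \
    root@oldhost:/ /vimages/oldhost/

# Paste the vimage's settings into rc.conf and adjust a couple of
# values, e.g. which em* interface to bridge to (em0 here).
vi /etc/rc.conf

# Bring the new vimage up in place of the physical machine.
service vimage start oldhost
```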

Now if only I could find a graceful solution to rsync dying with out-of-memory errors on massive numbers of files and/or hard links (rsync 3.0.7), I'd be all set!
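One workaround I've considered (an untested sketch, not something I've settled on): run rsync once per top-level directory so each invocation builds a smaller file list, at the cost of breaking hard links that span the split:

```shell
# Chunked rsync: one pass per top-level directory keeps each file
# list small enough to fit in memory. Hard links crossing directory
# boundaries will NOT be preserved across chunks.
for d in /vimages/oldhost/*/; do
    rsync -aH "$d" "root@backuphost:/vimages/oldhost/$(basename "$d")/"
done
```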

