IPC between vimage instances?

Ragnar Lonn raglon at packetfront.com
Mon Jun 13 11:46:38 GMT 2005


Julian Elischer wrote:

> Ragnar Lonn wrote:
>
>> Hello,
>>
>> I've been using vimage on FreeBSD 4.11 along with Netgraph to set up
>> a system that simulates many physical client machines for the purpose
>> of testing broadband Internet access hardware. I have hundreds of
>> vimages, each with its own ngeth0 network interface connected via
>> Netgraph to a real physical interface. It is working very well indeed,
>> but now I'm trying to set up logging from the various vimage instances
>> and have run into problems. Each vimage runs applications whose output
>> I want to log in an orderly manner. I'd like to use syslog, but as it
>> turns out, the processes inside a vimage cannot communicate with the
>> syslogd in the "default" vimage. I tried logging to the Unix domain
>> socket /var/run/log, but that didn't work from within a vimage (other
>> than "default", of course).
>
>
> did you ask syslog to open sockets in all the chroots?
> I assume yes.. I hadn't realised that the vimage code separates
> unix domain sockets etc. but I guess that makes sense.


Actually, I don't chroot the vimages as of now. They all see the same
filesystem, but writing to the syslogd Unix domain socket didn't work
anyhow. I'm both happy and frustrated at seeing how well separated
vimages are sometimes :-)
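
(For the record: if it ever becomes practical to run a syslogd inside
each vimage, the usual forward-to-one-collector setup only needs a
single line in each vimage's /etc/syslog.conf -- "loghost" below is a
placeholder name, not something from my setup:

    # forward all facilities/priorities to a central collector (UDP 514)
    *.*    @loghost

But without a per-vimage syslogd, that doesn't help me yet.)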

> As you mention, the usual answer is to get the syslog on each system to
> forward everything to one logging system.
>
> you could add a second interface to each vimage just for logging to
> keep it separate from the testing..


Hmm, I have avoided this because I didn't want to do a lot of interface
housekeeping. Actually, this leads to another question of mine :-)

Network interfaces can't be removed under FreeBSD, which causes me a
lot of trouble as I create many interfaces and move them into many
vimages. Then I remove vimages in order to create new ones (to
reconfigure the client simulation setup) and the network interfaces get
dumped back into the default vimage, from where I have to collect them.
I can't just create new interfaces when the setup is to be
reconfigured, because I can't delete the old interfaces.

Or can I?

Example:

ngctl mkpeer . eiface hook ether

...results in ngeth0 at default being created. Then I do:

ngctl shutdown ngeth0:

...and the interface is gone. It seems that doing a shutdown actually
causes the interface to be removed, right? But then I do something
like this:

# create ngeth0 at default
ngctl mkpeer . eiface hook ether
# create ngeth1 at default
ngctl mkpeer . eiface hook ether
# move ngeth1
vimage -i myvimage ngeth1

...and the interface is moved, showing up as ngeth0 at myvimage. Then I do:

ngctl shutdown ngeth0:
vimage myvimage
vimage -i - ngeth0

...and the interface is moved back to the default vimage, BUT it is
named ngeth1 at default, even though ngeth0 at default has been shut
down and is nowhere to be seen. This makes me suspect that interfaces
aren't properly removed when I issue a shutdown, even though they might
seem to be gone, so I have decided to reuse interfaces rather than
remove them.

Is this assumption correct? Or is it just a naming issue, one that
won't eventually lead to resource exhaustion if I continue creating,
moving and removing interfaces?

Being able to remove interfaces would be really great. Then I could
create extra logging interfaces in each vimage and not worry about
the cleanup nightmare afterwards. Right now, I have a lot of script
code just to find and reuse old ngeth interfaces sitting around in the
default vimage and if I'm to have two types of those interfaces
(one for logging, that has one underlying netgraph tree structure, and
 one for test traffic, using another netgraph tree structure) it would
likely be at least twice as much trouble. That's why I was looking for
some other way of communicating between different vimages.
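
For what it's worth, the find-and-reuse part of that script code can be
kept fairly small. A minimal sketch (the function name is mine, and the
canned interface list stands in for a live system's output) that picks
leftover ngeth interfaces out of an "ifconfig -l"-style list:

```shell
#!/bin/sh
# Hypothetical helper: print one leftover ngeth interface per line, so
# a script can reuse them instead of creating new ones.  The argument
# is a space-separated interface list as printed by "ifconfig -l".
find_ngeth() {
    echo "$1" | tr ' ' '\n' | grep '^ngeth'
}

# On a live system this would be: find_ngeth "$(ifconfig -l)"
# Canned example list instead:
find_ngeth "lo0 fxp0 ngeth0 ngeth3"
```

Tracking two classes of interface (logging vs. test traffic) would still
mean tagging them somehow, though, which is exactly the bookkeeping I'd
like to avoid.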

Regards,

  /Ragnar

More information about the freebsd-net mailing list