Network problems while running VirtualBox

Adam Vande More amvandemore at gmail.com
Thu Jul 14 03:02:14 UTC 2011


On Wed, Jul 13, 2011 at 8:55 PM, Peter Ross
<Peter.Ross at bogen.in-berlin.de>wrote:

> I am running named on the same box. Over time I have seen some errors there
> as well:
>
> Apr 13 05:17:41 bind named[23534]: internal_send: 192.168.50.145#65176:
> Cannot allocate memory
> Jun 21 23:30:44 bind named[39864]: internal_send: 192.168.50.251#36155:
> Cannot allocate memory
> Jun 24 15:28:00 bind named[39864]: internal_send: 192.168.50.251#28651:
> Cannot allocate memory
> Jun 28 12:57:52 bind named[2462]: internal_send: 192.168.165.154#1201:
> Cannot allocate memory
> Jul 13 19:43:05 bind named[4032]: internal_send: 192.168.167.147#52736:
> Cannot allocate memory
>
> coming from a sendmsg(2).
>
> My theory there is: my scp sends a lot of data at the same time, while
> named is sending a lot of data over time - both increasing the likelihood
> of the error.


That doesn't really answer the question of whether using a different ssh
binary helps, but I'm guessing it won't.  You can try different scp options,
such as the encryption algorithm (-c), compression (-C), bandwidth limiting
(-l), and verbose output (-v), to see if any clues turn up; a couple of
example invocations are below.
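
For example (host and file names are placeholders, and blowfish-cbc is just
one cipher OpenSSH supports):

  # -c selects the cipher, -C enables compression,
  # -l limits bandwidth in Kbit/s, -v gives verbose output
  scp -v -c blowfish-cbc bigfile user@host:/tmp/
  scp -C -l 20000 bigfile user@host:/tmp/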


>
>
>> Do you have any more info about the threshold of file size for when this
>> problem starts occurring?  Is it always the same?
>>
>
> No, it varies. Usually after a few GB. E.g. the last one lasted 11GB, but I
> had failures below 8GB of transfer before.
>

My machine specs are fairly similar to yours, although this is mostly a
desktop system (virtualbox-ose-4.0.10).  I am unable to reproduce the error
after several attempts at scp'ing a 20GB file generated from /dev/random
around; roughly what I ran is sketched below.  I assume this would have been
enough to trigger it on your system?
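
Roughly what I ran (file path and destination host are placeholders):

  # create a ~20GB test file from /dev/random
  dd if=/dev/random of=/var/tmp/bigtest bs=1m count=20480
  # push it to another host over the affected interface
  scp /var/tmp/bigtest user@otherhost:/var/tmp/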


>> E.g. if VBox has 2 GB mapped out and you get an error at a certain file
>> size, does reducing the VBox memory footprint allow a larger file to be
>> successfully sent?
>>
>
> Given that the amount of data is random just now, I cannot imagine how to
> get reliable numbers from this experiment.
>

I suspect this has less to do with actual memory and more to do with some
other buffer-like bottleneck.  Does tuning any of the network buffers make
any difference?  A couple to try (an example of checking and raising them
follows the list):

net.inet.ip.intr_queue_maxlen
net.link.ifqmaxlen
kern.ipc.nmbclusters
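
For example (the values are only illustrative, not recommendations):

  # check the current values
  sysctl net.inet.ip.intr_queue_maxlen net.link.ifqmaxlen kern.ipc.nmbclusters
  # these two can usually be raised at runtime
  sysctl net.inet.ip.intr_queue_maxlen=2048
  sysctl kern.ipc.nmbclusters=65536
  # net.link.ifqmaxlen is typically a boot-time tunable; set it in
  # /boot/loader.conf and reboot, e.g. net.link.ifqmaxlen="1024"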

If possible, does changing the VM's networking from bridged -> NAT or
vice versa result in any behavior change?  Something like the VBoxManage
commands below should do it.
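
For example (VM name and host interface are placeholders; the VM has to be
powered off first):

  # switch the first NIC to NAT
  VBoxManage modifyvm "Your VM" --nic1 nat
  # or back to bridged on a chosen host interface
  VBoxManage modifyvm "Your VM" --nic1 bridged --bridgeadapter1 em0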

-- 
Adam Vande More

