netmap: I got some troubles with netmap

Vincenzo Maffione v.maffione at gmail.com
Fri Jan 24 14:56:18 UTC 2014


2014/1/24 Wang Weidong <wangweidong1 at huawei.com>

> On 2014/1/20 20:39, Giuseppe Lettieri wrote:
> > Hi Wang,
> >
> > OK, you are using the netmap support in the upstream qemu git. That does
> not yet include all our modifications, some of which are very important for
> high throughput with VALE. In particular, the upstream qemu does not
> include the batching improvements in the frontend/backend interface, and it
> does not include the "map ring" optimization of the e1000 frontend. Please
> find attached a gzipped patch that contains all of our qemu code. The patch
> is against the latest upstream master (commit 1cf892ca).
> >
> > Please ./configure the patched qemu with the following option, in
> addition to any other option you may need:
> >
> > --enable-e1000-paravirt --enable-netmap \
> > --extra-cflags=-I/path/to/netmap/sys/directory
> >
> > Note that --enable-e1000-paravirt is needed to enable the "map ring"
> optimization in the e1000 frontend, even if you are not going to use the
> e1000-paravirt device.
> >
> > Now you should be able to rerun your tests. I am also attaching a README
> file that describes some more tests you may want to run.
> >
>
> Hello,


> Yes, I applied qemu-netmap-bc767e701.patch to QEMU and downloaded
> 20131019-tinycore-netmap.hdd.
> Then I ran the following tests:
>
> 1. I used the bridge setup below, testing between two VMs:
> qemu-system-x86_64 -m 2048 -boot c -net nic -net bridge,br=br1 -hda
> /home/wwd/tinycores/20131019-tinycore-netmap.hdd -enable-kvm -vnc :0
>
> With no device attached to br1:
> using pkt-gen, I got 237.95 kpps;
> using netserver/netperf, I got 1037 Mbits/sec with TCP_STREAM,
> peaking at 1621 Mbits/sec;
> using netserver/netperf, I got 3296 transactions/sec with TCP_RR;
> using netserver/netperf, I got 234M/86M bits/sec with UDP_STREAM.
>
> When I attached a host device to br1, the speed was 159.86 kpps:
> using netserver/netperf, I got 720 Mbits/sec with TCP_STREAM,
> peaking at 1000 Mbits/sec;
> using netserver/netperf, I got 3556 transactions/sec with TCP_RR;
> using netserver/netperf, I got 181M/181M bits/sec with UDP_STREAM.
>
> What do you think of these data?
>

You are using the old, deprecated QEMU command-line syntax (-net), so
honestly it's not clear to me what kind of network configuration you are
running.

Please use our scripts "launch-qemu.sh" and "prep-taps.sh", as described in
the README.images file (attached).
Alternatively, use syntax like in the following examples:

(#1)   qemu-system-x86_64 archdisk.qcow -enable-kvm -device
virtio-net-pci,netdev=mynet -netdev
tap,ifname=tap01,id=mynet,script=no,downscript=no -smp 2
(#2)   qemu-system-x86_64 archdisk.qcow -enable-kvm -device
e1000,mitigation=off,mac=00:AA:BB:CC:DD:01,netdev=mynet -netdev
netmap,ifname=vale0:01,id=mynet -smp 2

so that it's clear to us which network frontend (e.g. emulated NIC) and
network backend (e.g. netmap, tap, vde, etc.) you are using.
In example #1 we are using virtio-net as the frontend and tap as the
backend, while in example #2 we are using e1000 as the frontend and netmap
as the backend.
Also consider giving more than one core (e.g. -smp 2) to each guest, to
mitigate receiver-livelock problems.
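For completeness, the host-side setup for example #1 can be sketched as
follows. This is only an outline of the kind of work prep-taps.sh does,
not its actual contents, and the interface names (tap01, br0) are
placeholders. The sketch prints the commands by default; set DRY_RUN=0 to
actually execute them as root:

```shell
# Sketch of host-side tap/bridge setup for example #1.
# Interface names (tap01, br0) are placeholders; see prep-taps.sh for
# the real script. Prints commands by default; DRY_RUN=0 runs them.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

run ip tuntap add mode tap name tap01   # tap device for the guest
run ip link add name br0 type bridge    # software bridge on the host
run ip link set tap01 master br0        # plug the tap into the bridge
run ip link set tap01 up
run ip link set br0 up
```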


>
> 2. I used the VALE switch below:
> qemu-system-x86_64 -m 2048 -boot c -net nic -net netmap,vale0:0 -hda
> /home/wwd/tinycores/20131019-tinycore-netmap.hdd -enable-kvm -vnc :0
>
Same here, it's not clear what you are using. I guess each guest has an
e1000 device and is connected to a different port of the same VALE switch
(e.g. vale0:0 and vale0:1)?
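For reference, a setup like that would be launched along these lines (the
disk image names and MAC addresses below are just placeholders, not taken
from your configuration):

```shell
# Two guests attached to ports 0 and 1 of the same VALE switch (vale0).
# Disk image names and MAC addresses are illustrative.
qemu-system-x86_64 vm1.qcow -enable-kvm -smp 2 \
    -device e1000,mitigation=off,mac=00:AA:BB:CC:DD:01,netdev=mynet \
    -netdev netmap,ifname=vale0:0,id=mynet &

qemu-system-x86_64 vm2.qcow -enable-kvm -smp 2 \
    -device e1000,mitigation=off,mac=00:AA:BB:CC:DD:02,netdev=mynet \
    -netdev netmap,ifname=vale0:1,id=mynet &
```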

> Test with 2 VMs on the same host,
> vale0 without a physical device attached:
> using pkt-gen, the speed is 938 Kpps.
>

You should get ~4Mpps with e1000 frontend + netmap backend on a reasonably
good machine. Make sure you have ./configure'd QEMU with
--enable-e1000-paravirt.
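With that configuration, the pkt-gen test between the two guests would
look something like the following (assuming eth0 is the e1000 interface
inside each guest):

```shell
# Inside guest 2 (receiver):
pkt-gen -i eth0 -f rx

# Inside guest 1 (sender): transmit as fast as possible on eth0
pkt-gen -i eth0 -f tx
```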


> Using netperf -H 10.0.0.2 -t UDP_STREAM, I got 195M/195M; after adding
> -- -m 8, I only got 1.07M/1.07M.
> With a smaller message size, will the speed be lower?
>

If you use e1000 with netperf (without pkt-gen), your performance is doomed
to be horrible. Use e1000-paravirt as a frontend instead if you are
interested in netperf experiments.
Also consider that the point of using the "-- -m 8" option is to experiment
with high packet rates, so what you should measure here is not the
throughput in Mbps, but the packet rate: netperf reports the number of
packets sent and received, so you can obtain the packet rate by dividing by
the running time.
The throughput in Mbps is uninteresting here: if you want high bulk
throughput, just don't use "-- -m 8"; leave the defaults.
Using virtio-net in this case will help because of TSO offloads.
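For example, to turn netperf's packet counts into a rate (the numbers
below are made up purely for illustration):

```shell
# Turn netperf's packet counts into a packet rate.
# Example numbers (made up): 2,500,000 messages over a 10-second run.
pkts=2500000
secs=10
awk -v n="$pkts" -v t="$secs" 'BEGIN { printf "%.0f pps\n", n / t }'
```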

cheers
  Vincenzo


>
> With vale-ctl -a vale0:eth2:
> using pkt-gen, the speed is 928 Kpps;
> using netperf -H 10.0.0.2 -t UDP_STREAM, I got 209M/208M,
> and after adding -- -m 8, I only got 1.06M/1.06M.
>
> With vale-ctl -h vale0:eth2:
> using pkt-gen, the speed is 928 Kpps;
> using netperf -H 10.0.0.2 -t UDP_STREAM, I got 192M/192M,
> and after adding -- -m 8, I only got 1.06M/1.06M.
>
> Test with 2 VMs on two different hosts:
> I could only test via vale-ctl -h vale0:eth2, with eth2 set to
> promiscuous mode.
> Using pkt-gen with the default parameters, the speed is about 750 Kpps;
> using netperf -H 10.0.0.2 -t UDP_STREAM, I got 160M/160M.
> Is this right?
>

> 3. I can't use the l2 utilities.
> When I run "sudo l2open -t eth0 l2recv" (or l2send), I get "l2open
> ioctl(TUNSETIFF...): Invalid argument",
> and with "l2open -r eth0 l2recv", after waiting a moment (only a few
> seconds), I got this result:
> TEST-RESULT: 0.901 kpps 1pkts
> select/read=100.00 err=0
>
> Also, I can't find the l2 utilities on the net. Were they implemented by
> your team?
>
> All of these tests were run on VMs.
>
> Cheers.
> Wang
>
>
> >
> > Cheers,
> > Giuseppe
> >
> > Il 17/01/2014 04:39, Wang Weidong ha scritto:
> >> On 2014/1/16 18:24, facoltà wrote:
> [...]


-- 
Vincenzo Maffione
-------------- next part --------------
A non-text attachment was scrubbed...
Name: README.images
Type: application/octet-stream
Size: 14991 bytes
Desc: not available
URL: <http://lists.freebsd.org/pipermail/freebsd-net/attachments/20140124/b14b542a/attachment.obj>

