ptnetmap on bhyve - final status report

Stefano Garzarella stefanogarzarella at
Mon Aug 17 09:10:07 UTC 2015

Dear All,
I finished the last step of my GSoC 2015 project ("A FreeBSD/bhyve version of
the netmap virtual passthrough (ptnetmap) for VMs").
In the last week I tested my code and added more comments in bhyve
and netmap to describe my work.
I also added some lines to the bhyve.8 man page to explain how to use ptnetmap.

My code is available here:
I used the 10-STABLE branch for my work.
To get the patches for the modified modules, you can run the following
commands on my stable/10 branch:
    - bhyve
         svn diff -r 287649 usr.sbin/bhyve
    - virtio-net
         svn diff -r 287649 sys/dev/virtio/network
    - vmm.ko
         svn diff -r 287649 lib/libvmmapi sys/modules/vmm sys/amd64
    - netmap
         the changes will be shortly committed in HEAD and R10 by my mentor
         svn diff -r 287649 sys/conf/files sys/modules/netmap sys/dev/cxgbe \
                            sys/dev/netmap sys/net

The ptnetmap support for linux-KVM and QEMU is available here:

I implemented ptnetmap on bhyve in the following steps:
 - bhyve network backends
         I reused the code developed by Luigi Rizzo (my mentor) and Vincenzo
         Maffione to support multiple backends in bhyve and to interface them
         with the frontends.
         The available backends are:
            - tap
            - netmap (netmap, vale)
            - ptnetmap (ptnetmap, ptvale)

 - ptnetmap support on virtio-net device for FreeBSD
         I modified the virtio-net guest device driver and the virtio-net
         frontend of the host hypervisor (bhyve) to support ptnetmap.

         guest (FreeBSD):
                 - new PTNETMAP config flag in the virtio-net device driver to
                   check whether ptnetmap is supported.
                 - ptnetmap device-specific code (netmap -

         host (hypervisor):
                 - ptnetmap support on the virtio-net frontend (bhyve -
                 - ptnetmap backend [name: ptnetmap, ptvale] (bhyve -

 - map netmap host memory into the guest
         I added a new IOCTL to vmm.ko to map a userspace buffer into the
         guest VM. Then I implemented a new PCI device (ptnetmap-memdev) to
         expose the netmap host memory in the guest through a PCI memory BAR.

         kernel host (vmm.ko):
                 - new VM_MAP_USER_BUF ioctl to map a buffer into the guest

         userspace host (bhyve):
                 - vm_map_user_buf() in libvmmapi
                 - ptnetmap‐memdev device emulation (bhyve -

         kernel guest (netmap):
                 - device driver for ptnetmap-memdev included in netmap

 - ptnetmap support for FreeBSD host:
         I implemented kernel threads in the netmap module to support ptnetmap
         on a FreeBSD host.

         kernel host (netmap):
                 - nm_os_kthread_*() functions to handle netmap kthreads.
                   (netmap - netmap_freebsd.c ptnetmap.c)

 - netmap guest/host notification mechanisms.
         I needed two mechanisms:
         1) notification from the ptnetmap kthread to the guest VM
             (interrupt/irq)
             vmm.ko already has an IOCTL to send interrupts to the guest,
             and I used it in the ptnetmap kernel threads.
         2) notification from the guest VM to the ptnetmap kthread (write to
             a specific register)
             I added a new IOCTL to vmm.ko (VM_IO_REG_HANDLER) to catch writes
             on a specific I/O address and send a notification.
             For now I've implemented only one type of handler, which delivers
             these events in the kernel through wakeup() and msleep(), but I
             wrote the code to be easily extended to support other types of
             handler (cond_signal, write/ioctl on fd, etc.).

         kernel host (vmm.ko):
                 - new VM_IO_REG_HANDLER ioctl to catch writes/reads on an
                   I/O address and to choose a handler. (eg.
                 - vm_io_reg_handler() in libvmmapi

          kernel host (netmap):
                 - msleep() on event_id.

          userspace host (bhyve):
                 - vm_io_reg_handler(VTCFG_R_QNOTIFY, event_id) to
                   send guest notifications to the ptnetmap kthreads when the
                   guest writes on this virtio register (bhyve -
                 - tell the netmap kthreads the event_id on which they can wait


 Run a 2GB single-CPU virtual machine with three network ports that use
 the netmap and ptnetmap backends:

 bhyve -s 0,hostbridge -s 1,lpc \
         -s 2:1,virtio-net,vale0:1 \       /* normal vale backend */
         -s 2:2,ptnetmap-memdev \          /* ptnetmap-memdev is needed for
                                            * each ptnetmap port (if two or
                                            * more ptnetmap ports share the
                                            * same netmap memory allocator,
                                            * only one ptnetmap-memdev is
                                            * required) */
         -s 2:3,virtio-net,ptvale1:1 \     /* vale port in ptnetmap mode:
                                            * if the "pt" prefix is used,
                                            * the port is opened in
                                            * passthrough mode (ptnetmap) */
         -s 2:2,ptnetmap-memdev \
         -s 2:3,virtio-net,ptvale2{1 \     /* netmap pipe in ptnetmap mode */
         -s 3,ptnetmap-memdev \
         -s 4,virtio-net,ptnetmap:ix0 \    /* NIC in ptnetmap mode */
         -l com1,stdio -A -H -P -m 2G netmapvm

The results I obtained with pkt-gen are very close to the linux/KVM and
native netmap experiments (both on Linux and FreeBSD): physical devices
[14.88 Mpps], software switches [25 Mpps], shared memory channels [50 Mpps].
I used one instance of pkt-gen in the guest and the other one in the host.

Thanks for your help!
It was a pleasure working with you.


*Stefano Garzarella*
Software Engineer

e-mail: stefano.garzarella at

More information about the soc-status mailing list