[Bug 207446] Hang bringing up vtnet(4) on >8 cpu GCE VMs

bugzilla-noreply at freebsd.org
Fri Feb 26 22:17:36 UTC 2016


https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=207446

--- Comment #6 from Andy Carrel <wac at google.com> ---
After further investigation, it looks like the driver is accidentally using
the driver's own vtnet_max_vq_pairs*2 + 1 as the control virtqueue index
instead of the device's max_virtqueue_pairs*2 + 1.
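
To make the distinction concrete, here's a minimal sketch (field names are
illustrative, taken from the names mentioned in this report rather than the
exact driver source):

    /*
     * Per the virtio spec, the control virtqueue's position is fixed
     * by the number of virtqueue pairs the DEVICE advertises, not by
     * how many pairs the driver chooses to use.
     */

    /* Buggy: uses the driver's (possibly smaller) pair count. */
    ctrl_vq = sc->vtnet_max_vq_pairs * 2 + 1;

    /* Fixed: uses the device's advertised max_virtqueue_pairs. */
    ctrl_vq = sc->vt_device_max_vq_pairs * 2 + 1;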

I'm about to attach a patch against CURRENT which propagates the device's
max_virtqueue_pairs value (stored in a new softc field, "vt_device_max_vq_pairs")
in order to make sure the control virtqueue winds up in the correct place per
the virtio spec. The patch also exposes this value as a read-only sysctl,
dev.vtnet.X.device_max_vq_pairs.

e.g. # sysctl -a | grep vq_pair
dev.vtnet.0.act_vq_pairs: 3
dev.vtnet.0.max_vq_pairs: 3
dev.vtnet.0.device_max_vq_pairs: 16
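
For context, wiring up such a read-only sysctl in a FreeBSD driver generally
looks something like the following (a sketch, not the actual patch; the softc
field name follows the description above):

    /* In the driver's sysctl setup, e.g. during attach: */
    struct sysctl_ctx_list *ctx = device_get_sysctl_ctx(dev);
    struct sysctl_oid_list *child =
        SYSCTL_CHILDREN(device_get_sysctl_tree(dev));

    SYSCTL_ADD_INT(ctx, child, OID_AUTO, "device_max_vq_pairs",
        CTLFLAG_RD, &sc->vt_device_max_vq_pairs, 0,
        "Maximum virtqueue pairs advertised by the device");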

I've tested the patch successfully on a VM that supports 16
max_virtqueue_pairs, with vtnet_max_vq_pairs at the default of 8, as well as
with hw.vtnet.mq_max_pairs=3, and with hw.vtnet.mq_disable=1.
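
For anyone reproducing the tests, those tunables are set from loader.conf
before boot, e.g.:

    # /boot/loader.conf -- pick one per test run:
    hw.vtnet.mq_max_pairs=3   # cap the driver at 3 queue pairs
    hw.vtnet.mq_disable=1     # disable multiqueue entirely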

It'd be nice to include the original patch that raises VTNET_MAX_QUEUE_PAIRS
as well, though, since that should have some performance advantages on VMs
with many CPUs.
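
That change amounts to bumping the compile-time cap in if_vtnetvar.h, along
the lines of the following (the new value here is illustrative; the original
patch picks its own):

    /* sys/dev/virtio/network/if_vtnetvar.h */
    -#define VTNET_MAX_QUEUE_PAIRS  8
    +#define VTNET_MAX_QUEUE_PAIRS  16   /* illustrative value */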

-- 
You are receiving this mail because:
You are on the CC list for the bug.

