svn commit: r354676 - stable/12/sys/amd64/vmm

Andriy Gapon avg at FreeBSD.org
Wed Nov 13 07:41:20 UTC 2019


Author: avg
Date: Wed Nov 13 07:41:19 2019
New Revision: 354676
URL: https://svnweb.freebsd.org/changeset/base/354676

Log:
  MFC r353747: vmm: remove a wmb() call
  
  After removing wmb(), vm_set_rendezvous_func() became super trivial, so
  there was no point in keeping it.
  
  The wmb (sfence on amd64, lock nop on i386) was not needed.  This can be
  explained from several points of view.
  
  First, wmb() is used for store-store ordering (although the primitive
  is undocumented).  There was no obvious subsequent store that needed
  the barrier.
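  
  To illustrate the pattern that a store-store barrier is meant for, here
  is a minimal userland sketch (not vmm code; the names are made up): the
  barrier orders the data store ahead of the flag store.
  
	#include <stdatomic.h>

	/* Sketch only: the publish pattern a wmb()-style barrier targets. */
	int payload;
	atomic_int ready;

	void
	producer(void)
	{
		payload = 42;			/* first store */
		/* store-store ordering: 'payload' before 'ready' */
		atomic_thread_fence(memory_order_release);
		atomic_store_explicit(&ready, 1, memory_order_relaxed);
	}

	int
	consumer(void)
	{
		if (atomic_load_explicit(&ready, memory_order_acquire))
			return (payload);	/* guaranteed to observe 42 */
		return (-1);
	}
  
  On x86, as the next paragraph notes, the two plain stores are already
  kept in order by the hardware, so the release fence compiles down to
  little more than a compiler barrier.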
  
  Second, x86 has a strongly ordered memory model that includes total
  store order.  An explicit store barrier may be needed only when working
  with special memory (device memory, special caching modes) or when
  using special instructions (non-temporal stores).  Neither was the case
  for this code.
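  
  For contrast, a hypothetical userland sketch (unrelated to this change)
  of a case where sfence genuinely matters: non-temporal stores are
  weakly ordered, so a later flag store must be fenced off from them.
  
	#include <emmintrin.h>		/* _mm_stream_si32() */
	#include <xmmintrin.h>		/* _mm_sfence() */
	#include <stdatomic.h>

	int buf[1024];
	atomic_int buf_ready;

	void
	fill_buf(void)
	{
		int i;

		/* Non-temporal (write-combining) stores bypass TSO. */
		for (i = 0; i < 1024; i++)
			_mm_stream_si32(&buf[i], i);
		/* Order the NT stores ahead of the flag store. */
		_mm_sfence();
		atomic_store_explicit(&buf_ready, 1, memory_order_relaxed);
	}
  
  Without the sfence, a consumer that observes buf_ready == 1 could still
  read stale buf contents.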
  
  Third, I believe that there is a misconception that sfence "flushes" the
  store buffer in the sense that it speeds up the propagation of stores
  from the store buffer to global visibility.  I think that such
  propagation always happens as fast as possible.  sfence only makes
  subsequent stores wait for that propagation to complete.  So, sfence is
  useful only for ordering stores, and only in the situations described
  above.

Modified:
  stable/12/sys/amd64/vmm/vmm.c
Directory Properties:
  stable/12/   (props changed)

Modified: stable/12/sys/amd64/vmm/vmm.c
==============================================================================
--- stable/12/sys/amd64/vmm/vmm.c	Wed Nov 13 07:39:20 2019	(r354675)
+++ stable/12/sys/amd64/vmm/vmm.c	Wed Nov 13 07:41:19 2019	(r354676)
@@ -1236,22 +1236,6 @@ vcpu_require_state_locked(struct vm *vm, int vcpuid, e
 		panic("Error %d setting state to %d", error, newstate);
 }
 
-static void
-vm_set_rendezvous_func(struct vm *vm, vm_rendezvous_func_t func)
-{
-
-	KASSERT(mtx_owned(&vm->rendezvous_mtx), ("rendezvous_mtx not locked"));
-
-	/*
-	 * Update 'rendezvous_func' and execute a write memory barrier to
-	 * ensure that it is visible across all host cpus. This is not needed
-	 * for correctness but it does ensure that all the vcpus will notice
-	 * that the rendezvous is requested immediately.
-	 */
-	vm->rendezvous_func = func;
-	wmb();
-}
-
 #define	RENDEZVOUS_CTR0(vm, vcpuid, fmt)				\
 	do {								\
 		if (vcpuid >= 0)					\
@@ -1282,7 +1266,7 @@ vm_handle_rendezvous(struct vm *vm, int vcpuid)
 		if (CPU_CMP(&vm->rendezvous_req_cpus,
 		    &vm->rendezvous_done_cpus) == 0) {
 			VCPU_CTR0(vm, vcpuid, "Rendezvous completed");
-			vm_set_rendezvous_func(vm, NULL);
+			vm->rendezvous_func = NULL;
 			wakeup(&vm->rendezvous_func);
 			break;
 		}
@@ -2536,7 +2520,7 @@ restart:
 	vm->rendezvous_req_cpus = dest;
 	CPU_ZERO(&vm->rendezvous_done_cpus);
 	vm->rendezvous_arg = arg;
-	vm_set_rendezvous_func(vm, func);
+	vm->rendezvous_func = func;
 	mtx_unlock(&vm->rendezvous_mtx);
 
 	/*

