git: 8cd6af3324bf - stable/13 - vmm: take exclusive mem_segs_lock in vm_cleanup()

From: John Baldwin <jhb_at_FreeBSD.org>
Date: Thu, 26 Jan 2023 22:12:09 UTC
The branch stable/13 has been updated by jhb:

URL: https://cgit.FreeBSD.org/src/commit/?id=8cd6af3324bfb563a64d34b581e88efa46db6ac5

commit 8cd6af3324bfb563a64d34b581e88efa46db6ac5
Author:     Robert Wing <rew@FreeBSD.org>
AuthorDate: 2023-01-20 11:10:53 +0000
Commit:     John Baldwin <jhb@FreeBSD.org>
CommitDate: 2023-01-26 22:05:23 +0000

    vmm: take exclusive mem_segs_lock in vm_cleanup()
    
    The consumers of vm_cleanup() are vm_reinit() and vm_destroy().
    
    The call path for vm_reinit() is as follows, with vmmdev_ioctl()
    taking mem_segs_lock before calling down:
        vmmdev_ioctl()
        vm_reinit()
        vm_cleanup(destroy=false)
    
    The call path for vm_destroy() is (mem_segs_lock not taken):
        sysctl_vmm_destroy()
        vmmdev_destroy()
        vm_destroy()
        vm_cleanup(destroy=true)
    
    Fix the vm_destroy() path by taking mem_segs_lock in vm_cleanup()
    when destroy == true.
    
    Reviewed by:    corvink, markj, jhb
    Fixes:  67b69e76e8ee ("vmm: Use an sx lock to protect the memory map.")
    Differential Revision:  https://reviews.freebsd.org/D38071
    
    (cherry picked from commit c668e8173a8fc047b54a5c51b0fe4637e87836b6)
---
 sys/amd64/vmm/vmm.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/sys/amd64/vmm/vmm.c b/sys/amd64/vmm/vmm.c
index 87f1d9e45d58..57c8555f08fe 100644
--- a/sys/amd64/vmm/vmm.c
+++ b/sys/amd64/vmm/vmm.c
@@ -576,6 +576,9 @@ vm_cleanup(struct vm *vm, bool destroy)
 	struct mem_map *mm;
 	int i;
 
+	if (destroy)
+		vm_xlock_memsegs(vm);
+
 	ppt_unassign_all(vm);
 
 	if (vm->iommu != NULL)
@@ -613,6 +616,7 @@ vm_cleanup(struct vm *vm, bool destroy)
 	if (destroy) {
 		for (i = 0; i < VM_MAX_MEMSEGS; i++)
 			vm_free_memseg(vm, i);
+		vm_unlock_memsegs(vm);
 
 		vmmops_vmspace_free(vm->vmspace);
 		vm->vmspace = NULL;