git: 0fc6eebbf763 - stable/13 - vm_fault: Fix vm_fault_populate()'s handling of VM_FAULT_WIRE

From: Mark Johnston <markj_at_FreeBSD.org>
Date: Tue, 28 Dec 2021 00:43:58 UTC
The branch stable/13 has been updated by markj:

URL: https://cgit.FreeBSD.org/src/commit/?id=0fc6eebbf76334602c418d3e7bd780dd28b11507

commit 0fc6eebbf76334602c418d3e7bd780dd28b11507
Author:     Mark Johnston <markj@FreeBSD.org>
AuthorDate: 2021-12-14 20:10:46 +0000
Commit:     Mark Johnston <markj@FreeBSD.org>
CommitDate: 2021-12-28 00:36:07 +0000

    vm_fault: Fix vm_fault_populate()'s handling of VM_FAULT_WIRE
    
    vm_map_wire() works by calling vm_fault(VM_FAULT_WIRE) on each page in
    the range.  (For largepage mappings, it calls vm_fault() once per large
    page.)
    
    A pager's populate method may return more than one page to be mapped.
    If VM_FAULT_WIRE is also specified, we'd wire each page in the run, not
    just the fault page.  Consider an object with two pages mapped in a
    vm_map_entry, and suppose vm_map_wire() is called on the entry.  Then,
    the first vm_fault() would allocate and wire both pages, and the second
    would encounter a valid page upon lookup and wire it again in the
    regular fault handler.  So the second page is wired twice and will be
    leaked when the object is destroyed.
    
    Fix the problem by modifying vm_fault_populate() to wire only the fault
    page.  Also modify the error handler for pmap_enter(psind=1) to not test
    fs->wired, since it must be false.
    
    PR:             260347
    Reviewed by:    alc, kib
    Sponsored by:   The FreeBSD Foundation
    
    (cherry picked from commit 88642d978a999aaa3752e86d2f54b1a6aba7fc85)
---
 sys/vm/vm_fault.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/sys/vm/vm_fault.c b/sys/vm/vm_fault.c
index 6445f7af59a1..41346f8635ea 100644
--- a/sys/vm/vm_fault.c
+++ b/sys/vm/vm_fault.c
@@ -597,21 +597,23 @@ vm_fault_populate(struct faultstate *fs)
 		    (psind > 0 && rv == KERN_PROTECTION_FAILURE));
 		if (__predict_false(psind > 0 &&
 		    rv == KERN_PROTECTION_FAILURE)) {
+			MPASS(!fs->wired);
 			for (i = 0; i < npages; i++) {
 				rv = pmap_enter(fs->map->pmap, vaddr + ptoa(i),
-				    &m[i], fs->prot, fs->fault_type |
-				    (fs->wired ? PMAP_ENTER_WIRED : 0), 0);
+				    &m[i], fs->prot, fs->fault_type, 0);
 				MPASS(rv == KERN_SUCCESS);
 			}
 		}
 
 		VM_OBJECT_WLOCK(fs->first_object);
 		for (i = 0; i < npages; i++) {
-			if ((fs->fault_flags & VM_FAULT_WIRE) != 0)
+			if ((fs->fault_flags & VM_FAULT_WIRE) != 0 &&
+			    m[i].pindex == fs->first_pindex)
 				vm_page_wire(&m[i]);
 			else
 				vm_page_activate(&m[i]);
-			if (fs->m_hold != NULL && m[i].pindex == fs->first_pindex) {
+			if (fs->m_hold != NULL &&
+			    m[i].pindex == fs->first_pindex) {
 				(*fs->m_hold) = &m[i];
 				vm_page_wire(&m[i]);
 			}