git: 302c42610337 - stable/13 - riscv: Fix another race in pmap_pinit()

From: Mark Johnston <markj@FreeBSD.org>
Date: Tue, 01 Mar 2022 15:17:52 UTC
The branch stable/13 has been updated by markj:

URL: https://cgit.FreeBSD.org/src/commit/?id=302c426103379da3d7bdd4bafff4ada807e2ffbb

commit 302c426103379da3d7bdd4bafff4ada807e2ffbb
Author:     Mark Johnston <markj@FreeBSD.org>
AuthorDate: 2022-02-22 14:26:33 +0000
Commit:     Mark Johnston <markj@FreeBSD.org>
CommitDate: 2022-03-01 15:17:40 +0000

    riscv: Fix another race in pmap_pinit()
    
    Commit c862d5f2a789 ("riscv: Fix a race in pmap_pinit()") did not really
    fix the race.  Alan writes,
    
     Suppose that N entries in the L1 tables are in use, and we are in the
     middle of the memcpy().  Specifically, we have read the zero-filled
     (N+1)st entry from the kernel L1 table.  Then, we are preempted.  Now,
     another core/thread does pmap_growkernel(), which fills the (N+1)st
     entry.  Finally, we return to the original core/thread, and overwrite
     the valid entry with the zero that we earlier read.
    
    Try to fix the race properly, by copying kernel L1 entries while holding
    the allpmaps lock.  To avoid doing unnecessary work while holding this
    global lock, copy only the entries that we expect to be valid.
    
    Fixes:          c862d5f2a789 ("riscv: Fix a race in pmap_pinit()")
    Reported by:    alc, jrtc27
    Reviewed by:    alc
    Sponsored by:   The FreeBSD Foundation
    
    (cherry picked from commit d5c0a7b6d3923d2a6967810d0aa3e148a39351c1)
---
 sys/riscv/riscv/pmap.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/sys/riscv/riscv/pmap.c b/sys/riscv/riscv/pmap.c
index 97f96ebdf99e..91af051cf559 100644
--- a/sys/riscv/riscv/pmap.c
+++ b/sys/riscv/riscv/pmap.c
@@ -1231,12 +1231,20 @@ pmap_pinit(pmap_t pmap)
 
 	CPU_ZERO(&pmap->pm_active);
 
+	/*
+	 * Copy L1 entries from the kernel pmap.  This must be done with the
+	 * allpmaps lock held to avoid races with pmap_distribute_l1().
+	 */
 	mtx_lock(&allpmaps_lock);
 	LIST_INSERT_HEAD(&allpmaps, pmap, pm_list);
+	for (size_t i = pmap_l1_index(VM_MIN_KERNEL_ADDRESS);
+	    i < pmap_l1_index(VM_MAX_KERNEL_ADDRESS); i++)
+		pmap->pm_l1[i] = kernel_pmap->pm_l1[i];
+	for (size_t i = pmap_l1_index(DMAP_MIN_ADDRESS);
+	    i < pmap_l1_index(DMAP_MAX_ADDRESS); i++)
+		pmap->pm_l1[i] = kernel_pmap->pm_l1[i];
 	mtx_unlock(&allpmaps_lock);
 
-	memcpy(pmap->pm_l1, kernel_pmap->pm_l1, PAGE_SIZE);
-
 	vm_radix_init(&pmap->pm_root);
 
 	return (1);
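
The locking pattern the commit adopts — copying only the expected-valid kernel L1 ranges while holding the global lock, instead of a full-page `memcpy()` outside it — can be sketched in isolation. This is a minimal, hypothetical stand-in, not the actual pmap code: the types, table size, index bounds, and the use of a pthread mutex in place of the kernel's `allpmaps_lock` are all illustrative assumptions.

```c
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-ins; sizes and types are assumptions, not FreeBSD's. */
#define NL1ENTRIES 512
typedef uint64_t pd_entry_t;

static pd_entry_t kernel_l1[NL1ENTRIES];
static pthread_mutex_t allpmaps_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Copy the kernel-map and direct-map slices of the kernel L1 table into a
 * new pmap's L1 table while holding the lock, so a concurrent
 * pmap_growkernel() cannot fill an entry between our read of it and our
 * write.  The index bounds are passed in here; in the real code they come
 * from pmap_l1_index() applied to the VM_*_KERNEL_ADDRESS and
 * DMAP_*_ADDRESS constants.
 */
static void
pinit_copy(pd_entry_t *new_l1, size_t kmin, size_t kmax,
    size_t dmin, size_t dmax)
{
	pthread_mutex_lock(&allpmaps_lock);
	for (size_t i = kmin; i < kmax; i++)
		new_l1[i] = kernel_l1[i];
	for (size_t i = dmin; i < dmax; i++)
		new_l1[i] = kernel_l1[i];
	pthread_mutex_unlock(&allpmaps_lock);
}
```

Because every writer of `kernel_l1` (in the commit, `pmap_distribute_l1()`) takes the same lock, a new entry is either copied here or propagated to the new pmap by the writer — never silently overwritten with a stale zero, which was the race Alan described.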