svn commit: r354799 - stable/12/sys/arm64/arm64
Alan Cox
alc at FreeBSD.org
Sun Nov 17 22:44:40 UTC 2019
Author: alc
Date: Sun Nov 17 22:44:38 2019
New Revision: 354799
URL: https://svnweb.freebsd.org/changeset/base/354799
Log:
MFC r352847,352930,354585
Eliminate redundant calls to critical_enter() and critical_exit() from
pmap_update_entry(). It suffices that interrupts are blocked.
In short, pmap_enter_quick_locked("user space", ..., VM_PROT_READ) doesn't
work. More precisely, it doesn't set ATTR_AP(ATTR_AP_USER) in the page
table entry, so any attempt to read from the mapped page by user space
generates a page fault. This problem has gone unnoticed because the page
fault handler, vm_fault(), will ultimately call pmap_enter(), which
replaces the non-working page table entry with one that has
ATTR_AP(ATTR_AP_USER) set. This change reduces the number of page faults
during a "buildworld" by about 19.4%.
Eliminate a redundant pmap_load() from pmap_remove_pages().
There is no reason why the pmap_invalidate_all() in pmap_remove_pages()
must be performed before the final PV list lock release. Move it past
the lock release.
Eliminate a stale comment from pmap_page_test_mappings(). We implemented
a modified bit in r350004.
Modified:
stable/12/sys/arm64/arm64/pmap.c
Directory Properties:
stable/12/ (props changed)
Modified: stable/12/sys/arm64/arm64/pmap.c
==============================================================================
--- stable/12/sys/arm64/arm64/pmap.c Sun Nov 17 20:56:25 2019 (r354798)
+++ stable/12/sys/arm64/arm64/pmap.c Sun Nov 17 22:44:38 2019 (r354799)
@@ -3019,7 +3019,6 @@ pmap_update_entry(pmap_t pmap, pd_entry_t *pte, pd_ent
* as they may make use of an address we are about to invalidate.
*/
intr = intr_disable();
- critical_enter();
/* Clear the old mapping */
pmap_clear(pte);
@@ -3029,7 +3028,6 @@ pmap_update_entry(pmap_t pmap, pd_entry_t *pte, pd_ent
pmap_store(pte, newpte);
dsb(ishst);
- critical_exit();
intr_restore(intr);
}
@@ -3763,8 +3761,8 @@ pmap_enter_quick_locked(pmap_t pmap, vm_offset_t va, v
ATTR_AP(ATTR_AP_RO) | L3_PAGE;
if ((prot & VM_PROT_EXECUTE) == 0 || m->md.pv_memattr == DEVICE_MEMORY)
l3_val |= ATTR_XN;
- else if (va < VM_MAXUSER_ADDRESS)
- l3_val |= ATTR_PXN;
+ if (va < VM_MAXUSER_ADDRESS)
+ l3_val |= ATTR_AP(ATTR_AP_USER) | ATTR_PXN;
/*
* Now validate mapping with RO protection
@@ -4330,7 +4328,6 @@ pmap_remove_pages(pmap_t pmap)
L2_BLOCK,
("Attempting to remove an invalid "
"block: %lx", tpte));
- tpte = pmap_load(pte);
break;
case 2:
pte = pmap_l2_to_l3(pde, pv->pv_va);
@@ -4448,17 +4445,15 @@ pmap_remove_pages(pmap_t pmap)
free_pv_chunk(pc);
}
}
- pmap_invalidate_all(pmap);
if (lock != NULL)
rw_wunlock(lock);
+ pmap_invalidate_all(pmap);
PMAP_UNLOCK(pmap);
vm_page_free_pages_toq(&free, true);
}
/*
- * This is used to check if a page has been accessed or modified. As we
- * don't have a bit to see if it has been modified we have to assume it
- * has been if the page is read/write.
+ * This is used to check if a page has been accessed or modified.
*/
static boolean_t
pmap_page_test_mappings(vm_page_t m, boolean_t accessed, boolean_t modified)