arm pmap locking

Alan Cox alc at rice.edu
Sat Sep 8 17:52:01 UTC 2012


On 09/03/2012 20:44, Ian Lepore wrote:
> On Mon, 2012-09-03 at 17:54 -0500, Alan Cox wrote:
>> On 08/30/2012 13:12, Ian Lepore wrote:
>>> On Tue, 2012-08-28 at 13:49 -0500, Alan Cox wrote:
>>>> Can you please retry with the attached patch?  For the time being, I
>>>> decided to address the above problem by simply enabling recursion on the
>>>> new pmap lock.  As I mentioned in my prior message, the lock recursion
>>>> in the arm pmap is a mistake.  However, I'd rather not change two things
>>>> at once, i.e., replace the page queues lock and fix the lock recursion.
>>>> I'll take a look at eliminating the lock recursion later this week.
>>>>
>>>> Thanks,
>>>> Alan
>>>>
>>> Sorry for the delay, I finally got around to trying this today, and it
>>> seems to be working well initially -- it boots to multiuser and the only
>>> difference in the dmesg.boot with and without the patch is the compile
>>> date, and the kernel image is 128 bytes smaller with the patch.  I've
>>> got DIAGNOSTIC and INVARIANTS enabled; I'll run with the patch in place
>>> and let you know if anything glitches.
>>>
>> Could you please test the attached patch?  This is a small step toward
>> disentangling the arm pmap locking.
>>
>> Alan
>>
> Applied the patch, it's running just fine.
>
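
For reference, the stopgap of "enabling recursion" on the new pmap lock
mentioned above just means initializing the per-pmap mutex with
MTX_RECURSE, so the owning thread may acquire it again instead of
tripping an assertion.  Roughly like this (an illustrative sketch only,
not the exact macro in the tree):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    /* Illustrative only: MTX_RECURSE permits recursive acquisition. */
    #define PMAP_LOCK_INIT(pmap) \
        mtx_init(&(pmap)->pm_mtx, "pmap", NULL, MTX_DEF | MTX_RECURSE)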

Here is another patch.  This simplifies the kernel pmap locking in 
pmap_enter_pv() and corrects some comments.
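
For readability, pmap_enter_pv() with the patch applied reads roughly as
follows (reconstructed from the diff below; the tail of the function is
elided):

    static void
    pmap_enter_pv(struct vm_page *pg, struct pv_entry *pve, pmap_t pm,
        vm_offset_t va, u_int flags)
    {
        rw_assert(&pvh_global_lock, RA_WLOCKED);
        PMAP_ASSERT_LOCKED(pm);
        if (pg->md.pv_kva != 0) {
            /* Move the unmanaged kernel mapping onto this page's PV list. */
            pve->pv_pmap = kernel_pmap;
            pve->pv_va = pg->md.pv_kva;
            pve->pv_flags = PVF_WRITE | PVF_UNMAN;
            /* Take the kernel pmap lock only if pm isn't already it. */
            if (pm != kernel_pmap)
                PMAP_LOCK(kernel_pmap);
            TAILQ_INSERT_HEAD(&pg->md.pv_list, pve, pv_list);
            TAILQ_INSERT_HEAD(&kernel_pmap->pm_pvlist, pve, pv_plist);
            if (pm != kernel_pmap)
                PMAP_UNLOCK(kernel_pmap);
            pg->md.pv_kva = 0;
            if ((pve = pmap_get_pv_entry()) == NULL)
                panic("pmap_kenter_pv: no pv entries");
        }
        pve->pv_pmap = pm;
        pve->pv_va = va;
        pve->pv_flags = flags;
        TAILQ_INSERT_HEAD(&pg->md.pv_list, pve, pv_list);
        TAILQ_INSERT_HEAD(&pm->pm_pvlist, pve, pv_plist);
        pg->md.pvh_attrs |= flags & (PVF_REF | PVF_MOD);
        /* ... remainder unchanged ... */
    }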

Thanks in advance,
Alan

-------------- next part --------------
Index: arm/arm/pmap.c
===================================================================
--- arm/arm/pmap.c	(revision 240166)
+++ arm/arm/pmap.c	(working copy)
@@ -1588,11 +1588,11 @@ pmap_clearbit(struct vm_page *pg, u_int maskbits)
  */
 
 /*
- * pmap_enter_pv: enter a mapping onto a vm_page lst
+ * pmap_enter_pv: enter a mapping onto a vm_page's PV list
  *
  * => caller should hold the proper lock on pvh_global_lock
  * => caller should have pmap locked
- * => we will gain the lock on the vm_page and allocate the new pv_entry
+ * => we will (someday) gain the lock on the vm_page's PV list
  * => caller should adjust ptp's wire_count before calling
  * => caller should not adjust pmap's wire_count
  */
@@ -1600,33 +1600,26 @@ static void
 pmap_enter_pv(struct vm_page *pg, struct pv_entry *pve, pmap_t pm,
     vm_offset_t va, u_int flags)
 {
-	int km;
 
 	rw_assert(&pvh_global_lock, RA_WLOCKED);
-
+	PMAP_ASSERT_LOCKED(pm);
 	if (pg->md.pv_kva != 0) {
-		/* PMAP_ASSERT_LOCKED(pmap_kernel()); */
-		pve->pv_pmap = pmap_kernel();
+		pve->pv_pmap = kernel_pmap;
 		pve->pv_va = pg->md.pv_kva;
 		pve->pv_flags = PVF_WRITE | PVF_UNMAN;
+		if (pm != kernel_pmap)
+			PMAP_LOCK(kernel_pmap);
+		TAILQ_INSERT_HEAD(&pg->md.pv_list, pve, pv_list);
+		TAILQ_INSERT_HEAD(&kernel_pmap->pm_pvlist, pve, pv_plist);
+		if (pm != kernel_pmap)
+			PMAP_UNLOCK(kernel_pmap);
 		pg->md.pv_kva = 0;
-
-		if (!(km = PMAP_OWNED(pmap_kernel())))
-			PMAP_LOCK(pmap_kernel());
-		TAILQ_INSERT_HEAD(&pg->md.pv_list, pve, pv_list);
-		TAILQ_INSERT_HEAD(&pve->pv_pmap->pm_pvlist, pve, pv_plist);
-		PMAP_UNLOCK(pmap_kernel());
 		if ((pve = pmap_get_pv_entry()) == NULL)
 			panic("pmap_kenter_pv: no pv entries");
-		if (km)
-			PMAP_LOCK(pmap_kernel());
 	}
-
-	PMAP_ASSERT_LOCKED(pm);
 	pve->pv_pmap = pm;
 	pve->pv_va = va;
 	pve->pv_flags = flags;
-
 	TAILQ_INSERT_HEAD(&pg->md.pv_list, pve, pv_list);
 	TAILQ_INSERT_HEAD(&pm->pm_pvlist, pve, pv_plist);
 	pg->md.pvh_attrs |= flags & (PVF_REF | PVF_MOD);

