svn commit: r269134 - head/sys/vm
Alan Cox
alc at rice.edu
Thu Jul 31 18:35:01 UTC 2014
On 07/31/2014 04:00, Andreas Tobler wrote:
> On 31.07.14 06:35, Alan Cox wrote:
>> On 07/30/2014 16:26, Andreas Tobler wrote:
>>> On 30.07.14 23:17, Alan Cox wrote:
>>>> On 07/30/2014 15:15, Andreas Tobler wrote:
>>>>> On 30.07.14 21:54, Alan Cox wrote:
>>>>>> On 07/30/2014 14:46, Alan Cox wrote:
>>>>>>> On 07/30/2014 13:58, Andreas Tobler wrote:
>>>>>>>> Hi Alan,
>>>>>>>>
>>>>>>>> On 26.07.14 20:10, Alan Cox wrote:
>>>>>>>>> Author: alc
>>>>>>>>> Date: Sat Jul 26 18:10:18 2014
>>>>>>>>> New Revision: 269134
>>>>>>>>> URL: http://svnweb.freebsd.org/changeset/base/269134
>>>>>>>>>
>>>>>>>>> Log:
>>>>>>>>>   When unwiring a region of an address space, do not assume that
>>>>>>>>>   the underlying physical pages are mapped by the pmap. If, for
>>>>>>>>>   example, the application has performed an mprotect(..., PROT_NONE)
>>>>>>>>>   on any part of the wired region, then those pages will no longer
>>>>>>>>>   be mapped by the pmap. So, using the pmap to look up the wired
>>>>>>>>>   pages in order to unwire them doesn't always work, and when it
>>>>>>>>>   doesn't work wired pages are leaked.
>>>>>>>>>
>>>>>>>>>   To avoid the leak, introduce and use a new function,
>>>>>>>>>   vm_object_unwire(), that locates the wired pages by traversing
>>>>>>>>>   the object and its backing objects.
>>>>>>>>>
>>>>>>>>>   At the same time, switch from using pmap_change_wiring() to the
>>>>>>>>>   recently introduced function pmap_unwire() for unwiring the
>>>>>>>>>   region's mappings. pmap_unwire() is faster, because it operates
>>>>>>>>>   on a range of virtual addresses rather than a single virtual
>>>>>>>>>   page at a time. Moreover, by operating on a range, it is
>>>>>>>>>   superpage friendly. It doesn't waste time performing unnecessary
>>>>>>>>>   demotions.
>>>>>>>>>
>>>>>>>>> Reported by: markj
>>>>>>>>> Reviewed by: kib
>>>>>>>>> Tested by: pho, jmg (arm)
>>>>>>>>> Sponsored by: EMC / Isilon Storage Division
>>>>>>>> This commit makes my 32- and 64-bit PowerMacs panic.
>>>>>>>> Unfortunately, I'm not able to give you a backtrace in the form of
>>>>>>>> a textdump or a core dump.
>>>>>>>>
>>>>>>>> The only thing I have is this picture:
>>>>>>>>
>>>>>>>> http://people.freebsd.org/~andreast/r269134_panic.jpg
>>>>>>>>
>>>>>>>> This exact revision triggers the panic and also breaks the
>>>>>>>> textdump/coredump facility.
>>>>>>>>
>>>>>>>> How can I help debugging?
>>>>>>>>
>>>>>>> It appears to me that moea64_pvo_enter() had a pre-existing bug that
>>>>>>> got tickled by this change. Specifically, moea64_pvo_enter() doesn't
>>>>>>> set the PVO_WIRED flag when an unwired mapping already exists. It
>>>>>>> just returns with the mapping still in an unwired state.
>>>>>>> Consequently, when pmap_unwire() finally runs, it doesn't find a
>>>>>>> wired mapping.
>>>>>>>
>>>>>>> Try this:
>>>>>>>
>>>>>>> Index: powerpc/aim/mmu_oea64.c
>>>>>>> ===================================================================
>>>>>>> --- powerpc/aim/mmu_oea64.c	(revision 269127)
>>>>>>> +++ powerpc/aim/mmu_oea64.c	(working copy)
>>>>>>> @@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
>>>>>>>  		if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
>>>>>>>  			if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
>>>>>>>  			    (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC | LPTE_PP))
>>>>>>> -			    == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
>>>>>>> +			    == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
>>>>>>> +			    ((pvo->pvo_vaddr ^ flags) & PVO_WIRED)) {
>>>>>>>  				if (!(pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) {
>>>>>>>  					/* Re-insert if spilled */
>>>>>>>  					i = MOEA64_PTE_INSERT(mmu, ptegidx,
>>>>>>>
>>>>>>
>>>>>> The new conditional test needs to be inverted. Try this instead:
>>>>>>
>>>>>> Index: powerpc/aim/mmu_oea64.c
>>>>>> ===================================================================
>>>>>> --- powerpc/aim/mmu_oea64.c	(revision 269127)
>>>>>> +++ powerpc/aim/mmu_oea64.c	(working copy)
>>>>>> @@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
>>>>>>  		if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
>>>>>>  			if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
>>>>>>  			    (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC | LPTE_PP))
>>>>>> -			    == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
>>>>>> +			    == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
>>>>>> +			    ((pvo->pvo_vaddr ^ flags) & PVO_WIRED) == 0) {
>>>>>>  				if (!(pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) {
>>>>>>  					/* Re-insert if spilled */
>>>>>>  					i = MOEA64_PTE_INSERT(mmu, ptegidx,
>>>>>>
>>>>>
>>>>>
>>>>> The panic stays, but the message is different:
>>>>>
>>>>> panic: moea64_pvo_to_pte: pvo 0x10147ea0 has invalid pte 0xb341180 in
>>>>> moea64_pteg_table but valid in pvo.
>>>>>
>>>>
>>>> My attempted fix is doing something else wrong. Do you have a stack
>>>> trace?
>>>
>>> Thanks to my iPhone:
>>>
>>> http://people.freebsd.org/~andreast/r269134-1_panic.jpg
>>
>> Ok, this patch should fix both the original panic and the new one. They
>> are two distinct problems.
>
> Yep, thank you!
>
> Additionally, I adapted the 32-bit path in the same way and booted
> successfully with the patch below. Is that ok?
>
> Again, thanks a lot!
> Andreas
>
> Index: powerpc/aim/mmu_oea.c
> ===================================================================
> --- powerpc/aim/mmu_oea.c (revision 269326)
> +++ powerpc/aim/mmu_oea.c (working copy)
> @@ -1950,7 +1950,8 @@
>  		if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
>  			if ((pvo->pvo_pte.pte.pte_lo & PTE_RPGN) == pa &&
>  			    (pvo->pvo_pte.pte.pte_lo & PTE_PP) ==
> -			    (pte_lo & PTE_PP)) {
> +			    (pte_lo & PTE_PP) &&
> +			    ((pvo->pvo_vaddr ^ flags) & PVO_WIRED) == 0) {
>  				mtx_unlock(&moea_table_mutex);
>  				return (0);
>  			}
>
>
>
Here is a better fix for the problem in moea64_pvo_enter(). The
original fix destroys and recreates the PTE in order to wire it. This
new fix simply updates the PTE.

In the case of moea_pvo_enter(), there is also no need to destroy and
recreate the PTE.
Index: powerpc/aim/mmu_oea.c
===================================================================
--- powerpc/aim/mmu_oea.c	(revision 269127)
+++ powerpc/aim/mmu_oea.c	(working copy)
@@ -1951,7 +1951,21 @@ moea_pvo_enter(pmap_t pm, uma_zone_t zone, struct
 		if ((pvo->pvo_pte.pte.pte_lo & PTE_RPGN) == pa &&
 		    (pvo->pvo_pte.pte.pte_lo & PTE_PP) ==
 		    (pte_lo & PTE_PP)) {
+			/*
+			 * The PTE is not changing.  Instead, this may
+			 * be a request to change the mapping's wired
+			 * attribute.
+			 */
 			mtx_unlock(&moea_table_mutex);
+			if ((flags & PVO_WIRED) != 0 &&
+			    (pvo->pvo_vaddr & PVO_WIRED) == 0) {
+				pvo->pvo_vaddr |= PVO_WIRED;
+				pm->pm_stats.wired_count++;
+			} else if ((flags & PVO_WIRED) == 0 &&
+			    (pvo->pvo_vaddr & PVO_WIRED) != 0) {
+				pvo->pvo_vaddr &= ~PVO_WIRED;
+				pm->pm_stats.wired_count--;
+			}
 			return (0);
 		}
 		moea_pvo_remove(pvo, -1);
Index: powerpc/aim/mmu_oea64.c
===================================================================
--- powerpc/aim/mmu_oea64.c	(revision 269339)
+++ powerpc/aim/mmu_oea64.c	(working copy)
@@ -2234,6 +2234,7 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
     uint64_t pte_lo, int flags)
 {
 	struct pvo_entry *pvo;
+	uintptr_t pt;
 	uint64_t vsid;
 	int first;
 	u_int ptegidx;
@@ -2276,7 +2277,28 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
 			if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
 			    (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC | LPTE_PP))
 			    == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
+				/*
+				 * The physical page and protection are not
+				 * changing.  Instead, this may be a request
+				 * to change the mapping's wired attribute.
+				 */
+				pt = -1;
+				if ((flags & PVO_WIRED) != 0 &&
+				    (pvo->pvo_vaddr & PVO_WIRED) == 0) {
+					pt = MOEA64_PVO_TO_PTE(mmu, pvo);
+					pvo->pvo_vaddr |= PVO_WIRED;
+					pvo->pvo_pte.lpte.pte_hi |= LPTE_WIRED;
+					pm->pm_stats.wired_count++;
+				} else if ((flags & PVO_WIRED) == 0 &&
+				    (pvo->pvo_vaddr & PVO_WIRED) != 0) {
+					pt = MOEA64_PVO_TO_PTE(mmu, pvo);
+					pvo->pvo_vaddr &= ~PVO_WIRED;
+					pvo->pvo_pte.lpte.pte_hi &= ~LPTE_WIRED;
+					pm->pm_stats.wired_count--;
+				}
 				if (!(pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) {
+					KASSERT(pt == -1,
+					    ("moea64_pvo_enter: valid pt"));
 					/* Re-insert if spilled */
 					i = MOEA64_PTE_INSERT(mmu, ptegidx,
 					    &pvo->pvo_pte.lpte);
@@ -2283,6 +2305,14 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
 				if (i >= 0)
 					PVO_PTEGIDX_SET(pvo, i);
 				moea64_pte_overflow--;
+			} else if (pt != -1) {
+				/*
+				 * The PTE's wired attribute is not a
+				 * hardware feature, so there is no
+				 * need to invalidate any TLB entries.
+				 */
+				MOEA64_PTE_CHANGE(mmu, pt,
+				    &pvo->pvo_pte.lpte, pvo->pvo_vpn);
 			}
 			return (0);
 		}