svn commit: r269134 - head/sys/vm

Andreas Tobler andreast at FreeBSD.org
Wed Jul 30 21:26:48 UTC 2014


On 30.07.14 23:17, Alan Cox wrote:
> On 07/30/2014 15:15, Andreas Tobler wrote:
>> On 30.07.14 21:54, Alan Cox wrote:
>>> On 07/30/2014 14:46, Alan Cox wrote:
>>>> On 07/30/2014 13:58, Andreas Tobler wrote:
>>>>> Hi Alan,
>>>>>
>>>>> On 26.07.14 20:10, Alan Cox wrote:
>>>>>> Author: alc
>>>>>> Date: Sat Jul 26 18:10:18 2014
>>>>>> New Revision: 269134
>>>>>> URL: http://svnweb.freebsd.org/changeset/base/269134
>>>>>>
>>>>>> Log:
>>>>>>      When unwiring a region of an address space, do not assume that the
>>>>>>      underlying physical pages are mapped by the pmap.  If, for example, the
>>>>>>      application has performed an mprotect(..., PROT_NONE) on any part of the
>>>>>>      wired region, then those pages will no longer be mapped by the pmap.
>>>>>>      So, using the pmap to look up the wired pages in order to unwire them
>>>>>>      doesn't always work, and when it doesn't work wired pages are leaked.
>>>>>>
>>>>>>      To avoid the leak, introduce and use a new function vm_object_unwire()
>>>>>>      that locates the wired pages by traversing the object and its backing objects.
>>>>>>
>>>>>>      At the same time, switch from using pmap_change_wiring() to the recently
>>>>>>      introduced function pmap_unwire() for unwiring the region's mappings.
>>>>>>      pmap_unwire() is faster, because it operates on a range of virtual addresses
>>>>>>      rather than a single virtual page at a time.  Moreover, by operating on
>>>>>>      a range, it is superpage friendly.  It doesn't waste time performing
>>>>>>      unnecessary demotions.
>>>>>>
>>>>>>      Reported by:    markj
>>>>>>      Reviewed by:    kib
>>>>>>      Tested by:    pho, jmg (arm)
>>>>>>      Sponsored by:    EMC / Isilon Storage Division
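
The scenario described in the log can be reduced to a short userland
sequence.  The sketch below is purely illustrative and is not part of the
commit; it uses only standard POSIX calls (mmap, mlock, mprotect, munlock),
and the region size is arbitrary:

/*
 * Wire a region, revoke access on part of it so those pages drop out of
 * the pmap, then unwire the whole region.  Before r269134, the unwire
 * path looked the pages up through the pmap, so pages behind the
 * PROT_NONE range could be left wired (leaked).
 */
#include <sys/mman.h>
#include <err.h>
#include <unistd.h>

int
main(void)
{
	size_t pgsz = (size_t)getpagesize();
	size_t len = 4 * pgsz;
	char *p;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_ANON | MAP_PRIVATE,
	    -1, 0);
	if (p == MAP_FAILED)
		err(1, "mmap");
	if (mlock(p, len) != 0)			/* wire the whole region */
		err(1, "mlock");
	if (mprotect(p, pgsz, PROT_NONE) != 0)	/* first page leaves the pmap */
		err(1, "mprotect");
	if (munlock(p, len) != 0)		/* unwire the whole region */
		err(1, "munlock");
	return (0);
}

With vm_object_unwire(), the unwire step finds the pages by walking the
object and its backing objects rather than the pmap, so the PROT_NONE'd
page is unwired as well.
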
>>>>> This commit makes my 32- and 64-bit PowerMacs panic.
>>>>> Unfortunately, I'm not able to give you a backtrace, either as a
>>>>> textdump or as a core dump.
>>>>>
>>>>> The only thing I have is this picture:
>>>>>
>>>>> http://people.freebsd.org/~andreast/r269134_panic.jpg
>>>>>
>>>>> This exact revision introduces the panic and breaks the textdump/coredump
>>>>> facility.
>>>>>
>>>>> How can I help debugging?
>>>>>
>>>> It appears to me that moea64_pvo_enter() had a pre-existing bug that got
>>>> tickled by this change.  Specifically, moea64_pvo_enter() doesn't set
>>>> the PVO_WIRED flag when an unwired mapping already exists.  It just
>>>> returns with the mapping still in an unwired state.  Consequently, when
>>>> pmap_unwire() finally runs, it doesn't find a wired mapping.
>>>>
>>>> Try this:
>>>>
>>>> Index: powerpc/aim/mmu_oea64.c
>>>> ===================================================================
>>>> --- powerpc/aim/mmu_oea64.c     (revision 269127)
>>>> +++ powerpc/aim/mmu_oea64.c     (working copy)
>>>> @@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
>>>>                  if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
>>>>                          if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
>>>>                              (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC | LPTE_PP))
>>>> -                           == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
>>>> +                           == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
>>>> +                           ((pvo->pvo_vaddr ^ flags) & PVO_WIRED)) {
>>>>                                  if (!(pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) {
>>>>                                          /* Re-insert if spilled */
>>>>                                          i = MOEA64_PTE_INSERT(mmu, ptegidx,
>>>>
>>>
>>> The new conditional test needs to be inverted.  Try this instead:
>>>
>>> Index: powerpc/aim/mmu_oea64.c
>>> ===================================================================
>>> --- powerpc/aim/mmu_oea64.c     (revision 269127)
>>> +++ powerpc/aim/mmu_oea64.c     (working copy)
>>> @@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
>>>                  if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
>>>                          if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
>>>                              (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC | LPTE_PP))
>>> -                           == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
>>> +                           == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
>>> +                           ((pvo->pvo_vaddr ^ flags) & PVO_WIRED) == 0) {
>>>                                  if (!(pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) {
>>>                                          /* Re-insert if spilled */
>>>                                          i = MOEA64_PTE_INSERT(mmu, ptegidx,
>>>
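
For reference, (x ^ y) & PVO_WIRED in the added line is zero exactly when
x and y agree on the PVO_WIRED bit, which is why the first version of the
patch (without the == 0) treated the existing mapping as matching only when
the wiring state differed.  A standalone sketch of the idiom, using a
made-up flag value rather than the kernel's PVO_WIRED definition:

#include <assert.h>

#define	EX_WIRED	0x010	/* illustrative value, not the kernel's */

int
main(void)
{
	int existing = EX_WIRED;	/* existing mapping is wired */
	int requested = 0;		/* requested mapping is not wired */

	/* Non-zero: the two disagree on EX_WIRED. */
	assert(((existing ^ requested) & EX_WIRED) != 0);

	/* Zero: both operands agree (here, both have EX_WIRED set). */
	assert(((existing ^ EX_WIRED) & EX_WIRED) == 0);

	return (0);
}
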
>>
>>
>> The panic stays, but the message is different:
>>
>> panic: moea64_pvo_to_pte: pvo 0x10147ea0 has invalid pte 0xb341180 in
>> moea64_pteg_table but valid in pvo.
>>
>
> My attempted fix is doing something else wrong.  Do you have a stack trace?

Thanks to my iPhone:

http://people.freebsd.org/~andreast/r269134-1_panic.jpg

Thanks!
Andreas



