cvs commit: src/sys/i386/i386 pmap.c

Peter Wemm peter at FreeBSD.org
Fri Apr 28 19:05:10 UTC 2006


peter       2006-04-28 19:05:09 UTC

  FreeBSD src repository

  Modified files:
    sys/i386/i386        pmap.c 
  Log:
  Interim fix for pmap problems I introduced with my last commit.
  Remove the code to dynamically change the pv_entry limits.  Go back
  to a single fixed kva reservation for pv entries, like was done
  before when using the uma zone.  Go back to never freeing pages
  back to the free pool after they are no longer used, just like
  before.
  
  This stops the lock order reversal due to acquiring the kernel map
  lock while pmap was locked.
  
  This fixes the recursive panic if invariants are enabled.
  
  The problem was that allocating/freeing kva causes vm_map_entry
  nodes to be allocated/freed.  That can recurse back into pmap as
  new pages are hooked up to kvm, and hence all the trouble:
  allocating/freeing kva indirectly allocates/frees memory.
  
  So, by going back to a single fixed size kva block and an index,
  we avoid the recursion panics and the LOR.
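  The fixed-block-and-index scheme can be illustrated with a small
  user-space sketch.  The names (PV_CHUNKS, pv_base, pv_chunk_alloc) and
  sizes below are made up for illustration, not the identifiers actually
  used in pmap.c; a static array stands in for the reserved kva block.
  The key property is that allocation only bumps an index, so it never
  touches the kernel map and cannot recurse back into pmap:

  ```c
  #include <assert.h>
  #include <stddef.h>
  #include <stdint.h>

  #define PV_CHUNK_SIZE 4096      /* one page per pv chunk */
  #define PV_CHUNKS     128       /* size of the fixed reservation */

  /* Stand-in for the single fixed kva reservation. */
  static uint8_t pv_base[PV_CHUNKS * PV_CHUNK_SIZE];
  static int pv_next;             /* index of the first never-used chunk */

  /*
   * Hand out the next chunk of the fixed block.  No map-entry (here:
   * malloc) calls are made, so allocation cannot recurse; the trade-off
   * is that without extra bookkeeping, freed chunks cannot be reused.
   */
  static void *
  pv_chunk_alloc(void)
  {
          if (pv_next >= PV_CHUNKS)
                  return (NULL);  /* reservation exhausted */
          return (&pv_base[pv_next++ * PV_CHUNK_SIZE]);
  }
  ```

  Successive calls simply walk forward through the block, which is why
  holes left by frees cannot be tracked without additional state.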
  
  The problem is that now with a linear block of kva, we have no
  mechanism to track holes once pages are freed.  UMA has the same
  problem when using a custom object for a zone and a fixed reservation
  of kva.  Simple solutions like having a bitmap would work, but would
  be very inefficient when there are hundreds of thousands of bits
  in the map.  A first-free pointer is similarly flawed because pages
  can be freed at random and the first-free pointer would be rewinding
  huge amounts.  If we could allocate memory for tree structures or
  an external freelist, that would work.  Except we cannot allocate/free
  memory here, because we cannot allocate/free address space to put
  it in.  Anyway, my change here reverts back to the UMA behavior of
  not freeing pages for now, thereby avoiding holes in the map.
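  For concreteness, here is a sketch of the rejected bitmap alternative:
  one bit per chunk and a first-fit scan on allocation.  All names are
  illustrative, not from pmap.c.  The scan in bm_alloc() is O(n) in the
  number of bits, which is exactly the inefficiency noted above once the
  map holds hundreds of thousands of bits:

  ```c
  #include <assert.h>
  #include <limits.h>

  #define NBITS 1024              /* chunks tracked; real maps are far larger */
  #define BPW   (sizeof(unsigned long) * CHAR_BIT)

  static unsigned long bm[NBITS / BPW];

  /* First-fit: find and claim the first clear bit, -1 if the map is full. */
  static int
  bm_alloc(void)
  {
          int i;

          for (i = 0; i < NBITS; i++) {
                  if ((bm[i / BPW] & (1UL << (i % BPW))) == 0) {
                          bm[i / BPW] |= 1UL << (i % BPW);
                          return (i);
                  }
          }
          return (-1);
  }

  /* Freeing is cheap; it is the subsequent scans that get expensive. */
  static void
  bm_free(int i)
  {
          bm[i / BPW] &= ~(1UL << (i % BPW));
  }
  ```

  The scan restarts from bit 0 every time, so random frees keep pulling
  the search back to the front of the map, much like the rewinding
  first-free pointer described above.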
  
  ups@ had a truly evil idea that I'll investigate.  It should allow
  freeing unused pages again by giving us a no-cost way to track the
  holes in the kva block.  But in the meantime, this should get people
  booting with witness and/or invariants again.
  
  Footnote: amd64 doesn't have this problem because of the direct map
  access method.  I'd done all my witness/invariants testing there.  I'd
  never considered that the harmless-looking kmem_alloc/kmem_free calls
  would cause such a problem and it didn't show up on the boot test.
  
  Revision  Changes    Path
  1.553     +65 -82    src/sys/i386/i386/pmap.c
