svn commit: r208589 - head/sys/mips/mips

Jayachandran C. c.jayachandran at gmail.com
Fri Jun 11 11:14:58 UTC 2010


On Wed, Jun 9, 2010 at 11:41 PM, Jayachandran C.
<c.jayachandran at gmail.com> wrote:
> On Wed, Jun 9, 2010 at 11:20 AM, Jayachandran C.
> <c.jayachandran at gmail.com> wrote:
>> On Wed, Jun 9, 2010 at 3:01 AM, Jayachandran C.
>> <c.jayachandran at gmail.com> wrote:
>>> On Tue, Jun 8, 2010 at 12:03 PM, Alan Cox <alc at cs.rice.edu> wrote:
>>>>
>>>> VM_FREEPOOL_DIRECT is used by at least amd64 and ia64 for page table pages
>>>> and small kernel memory allocations.  Unlike mips, these machines don't have
>>>> MMU support for a direct map.  Their direct maps are just a range of
>>>> mappings in the regular (kernel) page table.  So, unlike mips, accesses
>>>> through their direct map may still miss in the TLB and require a page table
>>>> walk.  VM_FREEPOOL_* is a way to increase the physical locality (or
>>>> clustering) of page allocations, so that, for example, page table page
>>>> accesses by the pmap on amd64 are less likely to miss in the TLB.  However,
>>>> it doesn't place a hard restriction on the range of physical addresses that
>>>> will be used, which you need for mips.
>>>>
>>>> The impact of this clustering can be significant.  For example, on amd64 we
>>>> use 2MB page mappings to implement the direct map.  However, old Opterons
>>>> only had 8 data TLB entries for 2MB page mappings.  For a uniprocessor
>>>> kernel running on such an Opteron, I measured an 18% reduction in system
>>>> time during a buildworld with the introduction of VM_FREEPOOL_DIRECT.  (See
>>>> the commit logs for vm/vm_phys.c and the comment that precedes the
>>>> VM_NFREEORDER definition on amd64.)
>>>>
>>>> Until such time as superpage support is ported to mips from the amd64/i386
>>>> pmaps, I don't think there is a point in having more than one VM_FREEPOOL_*
>>>> on mips.  And then, the point would be to reduce fragmentation of the
>>>> physical memory that could be caused by small allocations, such as page
>>>> table pages.
>>>
>>> Thanks for the detailed explanation.
>>>
>>> Also, after looking at the code again, I think vm_phys_alloc_contig()
>>> can be optimized not to look at segments which lie outside the area
>>> of interest. The patch is:
>> [...]
>>> With this change, along with the vmparam.h changes for HIGHMEM, I
>>> think we should be able to use vm_phys_alloc_contig() for page table
>>> pages (or have I again missed something fundamental?).
>>
>> That patch was obviously wrong - many segments can map to one
>> freelist, as the comment right above my change noted.
>>
>> But the idea of eliminating freelists for which all the segments are
>> outside (low, high) may still be useful; I will look at this some more.
>
> I have attached a patch (also at
> http://people.freebsd.org/~jchandra/pmap-with-HIGHMEM-freelist.patch)
> which reverts most of the changes I did to convert the page table page
> allocation to use a UMA zone, and replaces them with an implementation
> using vm_phys_alloc_contig() and vm_contig_grow_cache(). This creates
> a new HIGHMEM freelist for mips for memory outside the KSEG0 area, and
> makes a few changes in vm_phys_alloc_contig() to skip freelists for
> which all the segments fall outside the requested address range.
>
> With this, buildworld performance on MIPS is similar to what I got
> with the older zone-based code.
>
> If this approach is okay, I will do another round of testing
> (buildworld passes, but I haven't really tested the case where
> grow_cache is called).  If the changes are not okay, I will add
> another page allocator that takes a freelist as an argument, as you
> had suggested earlier, instead of the vm_phys_alloc_contig() changes.
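
For reference, the freelist-elimination idea described in the quote
above would amount to something like this (a hypothetical sketch
against the vm_phys.c internals; the helper name is made up, and the
real patch is at the URL above):

	/*
	 * Hypothetical helper: return non-zero if any physical segment
	 * backing freelist "flind" intersects [low, high).  A freelist
	 * for which this returns zero cannot satisfy the request, so
	 * vm_phys_alloc_contig() could skip it entirely.
	 */
	static int
	vm_phys_freelist_intersects(int flind, vm_paddr_t low, vm_paddr_t high)
	{
		struct vm_phys_seg *seg;
		int segind;

		for (segind = 0; segind < vm_phys_nsegs; segind++) {
			seg = &vm_phys_segs[segind];
			if (seg->flind == flind && seg->start < high &&
			    seg->end > low)
				return (1);
		}
		return (0);
	}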

Here is the alternative patch
(http://people.freebsd.org/~jchandra/pmap-with-HIGHMEM-freelist-alternate.patch).
In this patch, the pmap.c changes are almost exactly the same as in the
patch above, except that the call to vm_phys_alloc_contig() for
allocating page table pages has been replaced with a call to a new
function, vm_phys_page_alloc().
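
In outline, the page table page allocation in this version reduces to
the following (a condensed sketch of the patch below; the actual retry
policy lives in pmap_pinit() and _pmap_allocpte()):

	vm_offset_t pteva;
	vm_page_t m;

	/* Allocate only from the KSEG0-addressable DEFAULT freelist. */
	for (;;) {
		m = vm_phys_page_alloc(VM_FREELIST_DEFAULT,
		    VM_FREEPOOL_DEFAULT, 0);
		if (m != NULL)
			break;
		/* Reclaim pages below MIPS_KSEG0_LARGEST_PHYS, then retry. */
		vm_contig_grow_cache(3, 0, MIPS_KSEG0_LARGEST_PHYS);
	}
	pteva = MIPS_PHYS_TO_KSEG0(VM_PAGE_TO_PHYS(m));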

The patch also changes sys/vm/vm_phys.c to:
- add vm_phys_page_alloc(int flind, int pool, int order), which
allocates a page from the given freelist
- add vm_phys_alloc_freelist_pages(int flind, int pool, int order),
which is called by both vm_phys_page_alloc() and vm_phys_alloc_pages()
to dequeue a page of the correct pool and order
- move the page initialization code out of vm_phys_alloc_contig() into
vm_page_alloc_init(), and use it in both vm_phys_page_alloc() and
vm_phys_alloc_contig()
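
The resulting layering is roughly:

	vm_phys_alloc_pages(pool, order)         /* API unchanged */
	    -> vm_phys_alloc_freelist_pages()    /* tried for each freelist */

	vm_phys_page_alloc(flind, pool, order)   /* new; one freelist only */
	    -> vm_phys_alloc_freelist_pages()
	    -> vm_page_alloc_init()              /* also used by
	                                            vm_phys_alloc_contig() */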

I have been running buildworld on this for a few hours now (with code
added to inject random allocation failures), and it seems to hold up.
Let me know your comments.

Thanks,
JC.
-------------- next part --------------
Index: sys/mips/include/vmparam.h
===================================================================
--- sys/mips/include/vmparam.h	(revision 208890)
+++ sys/mips/include/vmparam.h	(working copy)
@@ -103,8 +103,9 @@
 #define	VM_MAXUSER_ADDRESS	((vm_offset_t)0x80000000)
 #define	VM_MAX_MMAP_ADDR	VM_MAXUSER_ADDRESS
 
-#define	VM_MIN_KERNEL_ADDRESS		((vm_offset_t)0xC0000000)
-#define	VM_MAX_KERNEL_ADDRESS		((vm_offset_t)0xFFFFC000)
+#define	VM_MIN_KERNEL_ADDRESS	((vm_offset_t)0xC0000000)
+#define	VM_MAX_KERNEL_ADDRESS	((vm_offset_t)0xFFFFC000)
+#define	VM_HIGHMEM_ADDRESS	((vm_paddr_t)0x20000000)
 #if 0
 #define	KERNBASE		(VM_MIN_KERNEL_ADDRESS)
 #else
@@ -168,13 +169,15 @@
 #define	VM_FREEPOOL_DIRECT	1
 
 /*
- * we support 1 free list:
+ * we support 2 free lists:
  *
- *	- DEFAULT for all systems
+ *	- DEFAULT for direct mapped (KSEG0) pages
+ *	- HIGHMEM for other pages 
  */
 
-#define	VM_NFREELIST		1
-#define	VM_FREELIST_DEFAULT	0
+#define	VM_NFREELIST		2
+#define	VM_FREELIST_DEFAULT	1
+#define	VM_FREELIST_HIGHMEM	0
 
 /*
  * The largest allocation size is 1MB.
Index: sys/mips/mips/pmap.c
===================================================================
--- sys/mips/mips/pmap.c	(revision 208890)
+++ sys/mips/mips/pmap.c	(working copy)
@@ -184,8 +184,6 @@
 static int init_pte_prot(vm_offset_t va, vm_page_t m, vm_prot_t prot);
 static void pmap_TLB_invalidate_kernel(vm_offset_t);
 static void pmap_TLB_update_kernel(vm_offset_t, pt_entry_t);
-static vm_page_t pmap_alloc_pte_page(pmap_t, unsigned int, int, vm_offset_t *);
-static void pmap_release_pte_page(vm_page_t);
 
 #ifdef SMP
 static void pmap_invalidate_page_action(void *arg);
@@ -193,10 +191,6 @@
 static void pmap_update_page_action(void *arg);
 #endif
 
-static void pmap_ptpgzone_dtor(void *mem, int size, void *arg);
-static void *pmap_ptpgzone_allocf(uma_zone_t, int, u_int8_t *, int);
-static uma_zone_t ptpgzone;
-
 struct local_sysmaps {
 	struct mtx lock;
 	vm_offset_t base;
@@ -329,7 +323,7 @@
 }
 
 /*
- *	Bootstrap the system enough to run with virtual memory.  This
+ * Bootstrap the system enough to run with virtual memory.  This
  * assumes that the phys_avail array has been initialized.
  */
 void
@@ -535,10 +529,6 @@
 	pv_entry_max = PMAP_SHPGPERPROC * maxproc + cnt.v_page_count;
 	pv_entry_high_water = 9 * (pv_entry_max / 10);
 	uma_zone_set_obj(pvzone, &pvzone_obj, pv_entry_max);
-
-	ptpgzone = uma_zcreate("PT ENTRY", PAGE_SIZE, NULL, pmap_ptpgzone_dtor,
-	    NULL, NULL, PAGE_SIZE - 1, UMA_ZONE_NOFREE | UMA_ZONE_ZINIT);
-	uma_zone_set_allocf(ptpgzone, pmap_ptpgzone_allocf);
 }
 
 /***************************************************
@@ -885,12 +875,8 @@
 	/*
 	 * If the page is finally unwired, simply free it.
 	 */
+	vm_page_free_zero(m);
 	atomic_subtract_int(&cnt.v_wire_count, 1);
-	PMAP_UNLOCK(pmap);
-	vm_page_unlock_queues();
-	pmap_release_pte_page(m);
-	vm_page_lock_queues();
-	PMAP_LOCK(pmap);
 	return (1);
 }
 
@@ -949,96 +935,35 @@
 	bzero(&pmap->pm_stats, sizeof pmap->pm_stats);
 }
 
+
 static void
-pmap_ptpgzone_dtor(void *mem, int size, void *arg)
+pmap_grow_pte_page_cache(int wait)
 {
-#ifdef INVARIANTS
-	static char zeropage[PAGE_SIZE];
-
-	KASSERT(size == PAGE_SIZE,
-		("pmap_ptpgzone_dtor: invalid size %d", size));
-	KASSERT(bcmp(mem, zeropage, PAGE_SIZE) == 0,
-		("pmap_ptpgzone_dtor: freeing a non-zeroed page"));
-#endif
+	printf("[%s] wait %x\n", __func__, wait);
+	vm_contig_grow_cache(3, 0, MIPS_KSEG0_LARGEST_PHYS);
 }
 
-static void *
-pmap_ptpgzone_allocf(uma_zone_t zone, int bytes, u_int8_t *flags, int wait)
-{
-	vm_page_t m;
-	vm_paddr_t paddr;
-	int tries;
-	
-	KASSERT(bytes == PAGE_SIZE,
-		("pmap_ptpgzone_allocf: invalid allocation size %d", bytes));
 
-	*flags = UMA_SLAB_PRIV;
-	tries = 0;
-retry:
-	m = vm_phys_alloc_contig(1, 0, MIPS_KSEG0_LARGEST_PHYS,
-	    PAGE_SIZE, PAGE_SIZE);
-	if (m == NULL) {
-                if (tries < ((wait & M_NOWAIT) != 0 ? 1 : 3)) {
-			vm_contig_grow_cache(tries, 0, MIPS_KSEG0_LARGEST_PHYS);
-			tries++;
-			goto retry;
-		} else
-			return (NULL);
-	}
-
-	paddr = VM_PAGE_TO_PHYS(m);
-	return ((void *)MIPS_PHYS_TO_KSEG0(paddr));
-}	
-
 static vm_page_t
-pmap_alloc_pte_page(pmap_t pmap, unsigned int index, int wait, vm_offset_t *vap)
+pmap_alloc_pte_page(unsigned int index, int wait)
 {
-	vm_paddr_t paddr;
-	void *va;
 	vm_page_t m;
-	int locked;
 
-	locked = mtx_owned(&pmap->pm_mtx);
-	if (locked) {
-		mtx_assert(&vm_page_queue_mtx, MA_OWNED);
-		PMAP_UNLOCK(pmap);
-		vm_page_unlock_queues();
-	}
-	va = uma_zalloc(ptpgzone, wait);
-	if (locked) {
-		vm_page_lock_queues();
-		PMAP_LOCK(pmap);
-	}
-	if (va == NULL)
+	m = vm_phys_page_alloc(VM_FREELIST_DEFAULT, VM_FREEPOOL_DEFAULT, 0);
+	if (m == NULL)
 		return (NULL);
 
-	paddr = MIPS_KSEG0_TO_PHYS(va);
-	m = PHYS_TO_VM_PAGE(paddr);
-	
-	if (!locked)
-		vm_page_lock_queues();
+	if ((m->flags & PG_ZERO) == 0)
+		pmap_zero_page(m);
+
 	m->pindex = index;
 	m->valid = VM_PAGE_BITS_ALL;
-	m->wire_count = 1;
-	if (!locked)
-		vm_page_unlock_queues();
-
 	atomic_add_int(&cnt.v_wire_count, 1);
-	*vap = (vm_offset_t)va;
+	m->wire_count = 1;
 	return (m);
 }
 
-static void
-pmap_release_pte_page(vm_page_t m)
-{
-	void *va;
-	vm_paddr_t paddr;
 
-	paddr = VM_PAGE_TO_PHYS(m);
-	va = (void *)MIPS_PHYS_TO_KSEG0(paddr);
-	uma_zfree(ptpgzone, va);
-}
-
 /*
  * Initialize a preallocated and zeroed pmap structure,
  * such as one in a vmspace structure.
@@ -1055,10 +980,10 @@
 	/*
 	 * allocate the page directory page
 	 */
-	ptdpg = pmap_alloc_pte_page(pmap, NUSERPGTBLS, M_WAITOK, &ptdva);
-	if (ptdpg == NULL)
-		return (0);
+	while ((ptdpg = pmap_alloc_pte_page(NUSERPGTBLS, M_WAITOK)) == NULL)
+	       pmap_grow_pte_page_cache(M_WAITOK);
 
+	ptdva = MIPS_PHYS_TO_KSEG0(VM_PAGE_TO_PHYS(ptdpg));
 	pmap->pm_segtab = (pd_entry_t *)ptdva;
 	pmap->pm_active = 0;
 	pmap->pm_ptphint = NULL;
@@ -1089,15 +1014,28 @@
 	/*
 	 * Find or fabricate a new pagetable page
 	 */
-	m = pmap_alloc_pte_page(pmap, ptepindex, flags, &pteva);
-	if (m == NULL)
+	if ((m = pmap_alloc_pte_page(ptepindex, flags)) == NULL) {
+		if (flags & M_WAITOK) {
+			PMAP_UNLOCK(pmap);
+			vm_page_unlock_queues();
+			pmap_grow_pte_page_cache(flags);
+			vm_page_lock_queues();
+			PMAP_LOCK(pmap);
+		}
+
+		/*
+		 * Indicate the need to retry.	While waiting, the page
+		 * table page may have been allocated.
+		 */
 		return (NULL);
+	}
 
 	/*
 	 * Map the pagetable page into the process address space, if it
 	 * isn't already there.
 	 */
 
+	pteva = MIPS_PHYS_TO_KSEG0(VM_PAGE_TO_PHYS(m));
 	pmap->pm_stats.resident_count++;
 	pmap->pm_segtab[ptepindex] = (pd_entry_t)pteva;
 
@@ -1193,7 +1131,7 @@
 
 	ptdpg->wire_count--;
 	atomic_subtract_int(&cnt.v_wire_count, 1);
-	pmap_release_pte_page(ptdpg);
+	vm_page_free_zero(ptdpg);
 	PMAP_LOCK_DESTROY(pmap);
 }
 
@@ -1203,7 +1141,6 @@
 void
 pmap_growkernel(vm_offset_t addr)
 {
-	vm_offset_t pageva;
 	vm_page_t nkpg;
 	pt_entry_t *pte;
 	int i;
@@ -1238,13 +1175,13 @@
 		/*
 		 * This index is bogus, but out of the way
 		 */
-		nkpg = pmap_alloc_pte_page(kernel_pmap, nkpt, M_NOWAIT, &pageva);
+		nkpg = pmap_alloc_pte_page(nkpt, M_NOWAIT);
 
 		if (!nkpg)
 			panic("pmap_growkernel: no memory to grow kernel");
 
 		nkpt++;
-		pte = (pt_entry_t *)pageva;
+		pte = (pt_entry_t *)MIPS_PHYS_TO_KSEG0(VM_PAGE_TO_PHYS(nkpg));
 		segtab_pde(kernel_segmap, kernel_vm_end) = (pd_entry_t)pte;
 
 		/*
Index: sys/vm/vm_phys.c
===================================================================
--- sys/vm/vm_phys.c	(revision 208890)
+++ sys/vm/vm_phys.c	(working copy)
@@ -93,7 +93,10 @@
 static int vm_phys_paddr_to_segind(vm_paddr_t pa);
 static void vm_phys_split_pages(vm_page_t m, int oind, struct vm_freelist *fl,
     int order);
+static vm_page_t vm_phys_alloc_freelist_pages(int flind, int pool, int order);
+static void vm_page_alloc_init(vm_page_t m, struct vnode **drop);
 
+
 /*
  * Outputs the state of the physical memory allocator, specifically,
  * the amount of physical memory in each free list.
@@ -293,6 +296,29 @@
 }
 
 /*
+ * Grab a page from the specified freelist with the given pool and
+ * order.
+ */
+vm_page_t
+vm_phys_page_alloc(int flind, int pool, int order)
+{
+	struct vnode *drop;
+	vm_page_t m;
+
+	mtx_lock(&vm_page_queue_free_mtx);
+	m = vm_phys_alloc_freelist_pages(flind, pool, order);
+	if (m == NULL) {
+		mtx_unlock(&vm_page_queue_free_mtx);
+		return (NULL);
+	}
+	vm_page_alloc_init(m, &drop);
+	mtx_unlock(&vm_page_queue_free_mtx);
+	if (drop)
+		vdrop(drop);
+	return (m);
+}
+
+/*
  * Allocate a contiguous, power of two-sized set of physical pages
  * from the free lists.
  *
@@ -301,49 +327,68 @@
 vm_page_t
 vm_phys_alloc_pages(int pool, int order)
 {
+	vm_page_t m;
+	int flind;
+
+	for (flind = 0; flind < vm_nfreelists; flind++) {
+		m = vm_phys_alloc_freelist_pages(flind, pool, order);
+		if (m != NULL)
+			return m;
+	}
+	return (NULL);
+}
+
+/*
+ * Find and dequeue a free page on the given free list, with the
+ * specified pool and order.
+ */
+static vm_page_t
+vm_phys_alloc_freelist_pages(int flind, int pool, int order)
+{	
 	struct vm_freelist *fl;
 	struct vm_freelist *alt;
-	int flind, oind, pind;
+	int oind, pind;
 	vm_page_t m;
 
+	KASSERT(flind < VM_NFREELIST,
+	    ("vm_phys_alloc_freelist_pages: freelist %d is out of range", flind));
 	KASSERT(pool < VM_NFREEPOOL,
-	    ("vm_phys_alloc_pages: pool %d is out of range", pool));
+	    ("vm_phys_alloc_freelist_pages: pool %d is out of range", pool));
 	KASSERT(order < VM_NFREEORDER,
-	    ("vm_phys_alloc_pages: order %d is out of range", order));
+	    ("vm_phys_alloc_freelist_pages: order %d is out of range", order));
 	mtx_assert(&vm_page_queue_free_mtx, MA_OWNED);
-	for (flind = 0; flind < vm_nfreelists; flind++) {
-		fl = vm_phys_free_queues[flind][pool];
-		for (oind = order; oind < VM_NFREEORDER; oind++) {
-			m = TAILQ_FIRST(&fl[oind].pl);
+
+	fl = vm_phys_free_queues[flind][pool];
+	for (oind = order; oind < VM_NFREEORDER; oind++) {
+		m = TAILQ_FIRST(&fl[oind].pl);
+		if (m != NULL) {
+			TAILQ_REMOVE(&fl[oind].pl, m, pageq);
+			fl[oind].lcnt--;
+			m->order = VM_NFREEORDER;
+			vm_phys_split_pages(m, oind, fl, order);
+			return (m);
+		}
+	}
+
+	/*
+	 * The given pool was empty.  Find the largest
+	 * contiguous, power-of-two-sized set of pages in any
+	 * pool.  Transfer these pages to the given pool, and
+	 * use them to satisfy the allocation.
+	 */
+	for (oind = VM_NFREEORDER - 1; oind >= order; oind--) {
+		for (pind = 0; pind < VM_NFREEPOOL; pind++) {
+			alt = vm_phys_free_queues[flind][pind];
+			m = TAILQ_FIRST(&alt[oind].pl);
 			if (m != NULL) {
-				TAILQ_REMOVE(&fl[oind].pl, m, pageq);
-				fl[oind].lcnt--;
+				TAILQ_REMOVE(&alt[oind].pl, m, pageq);
+				alt[oind].lcnt--;
 				m->order = VM_NFREEORDER;
+				vm_phys_set_pool(pool, m, oind);
 				vm_phys_split_pages(m, oind, fl, order);
 				return (m);
 			}
 		}
-
-		/*
-		 * The given pool was empty.  Find the largest
-		 * contiguous, power-of-two-sized set of pages in any
-		 * pool.  Transfer these pages to the given pool, and
-		 * use them to satisfy the allocation.
-		 */
-		for (oind = VM_NFREEORDER - 1; oind >= order; oind--) {
-			for (pind = 0; pind < VM_NFREEPOOL; pind++) {
-				alt = vm_phys_free_queues[flind][pind];
-				m = TAILQ_FIRST(&alt[oind].pl);
-				if (m != NULL) {
-					TAILQ_REMOVE(&alt[oind].pl, m, pageq);
-					alt[oind].lcnt--;
-					m->order = VM_NFREEORDER;
-					vm_phys_set_pool(pool, m, oind);
-					vm_phys_split_pages(m, oind, fl, order);
-					return (m);
-				}
-			}
-		}
 	}
 	return (NULL);
 }
@@ -577,6 +622,56 @@
 }
 
 /*
+ * Initialize a page that has been freshly dequeued from a freelist;
+ * the caller must drop the vnode returned in "drop" if it is not NULL.
+ *
+ * To be called with vm_page_queue_free_mtx held.
+ */
+static void
+vm_page_alloc_init(vm_page_t m, struct vnode **drop)
+{
+	vm_object_t m_object;
+
+	KASSERT(m->queue == PQ_NONE,
+	    ("vm_phys_alloc_contig: page %p has unexpected queue %d",
+	    m, m->queue));
+	KASSERT(m->wire_count == 0,
+	    ("vm_phys_alloc_contig: page %p is wired", m));
+	KASSERT(m->hold_count == 0,
+	    ("vm_phys_alloc_contig: page %p is held", m));
+	KASSERT(m->busy == 0,
+	    ("vm_phys_alloc_contig: page %p is busy", m));
+	KASSERT(m->dirty == 0,
+	    ("vm_phys_alloc_contig: page %p is dirty", m));
+	KASSERT(pmap_page_get_memattr(m) == VM_MEMATTR_DEFAULT,
+	    ("vm_phys_alloc_contig: page %p has unexpected memattr %d",
+	    m, pmap_page_get_memattr(m)));
+	mtx_assert(&vm_page_queue_free_mtx, MA_OWNED);
+
+	*drop = NULL;
+	if ((m->flags & PG_CACHED) != 0) {
+		m->valid = 0;
+		m_object = m->object;
+		vm_page_cache_remove(m);
+		if (m_object->type == OBJT_VNODE &&
+		    m_object->cache == NULL)
+			*drop = m_object->handle;
+	} else {
+		KASSERT(VM_PAGE_IS_FREE(m),
+		    ("vm_phys_alloc_contig: page %p is not free", m));
+		KASSERT(m->valid == 0,
+		    ("vm_phys_alloc_contig: free page %p is valid", m));
+		cnt.v_free_count--;
+	}
+	if (m->flags & PG_ZERO)
+		vm_page_zero_count--;
+	/* Don't clear the PG_ZERO flag; we'll need it later. */
+	m->flags = PG_UNMANAGED | (m->flags & PG_ZERO);
+	m->oflags = 0;
+	/* Unmanaged pages don't use "act_count". */
+}
+
+/*
  * Allocate a contiguous set of physical pages of the given size
  * "npages" from the free lists.  All of the physical pages must be at
  * or above the given physical address "low" and below the given
@@ -592,10 +687,11 @@
 {
 	struct vm_freelist *fl;
 	struct vm_phys_seg *seg;
-	vm_object_t m_object;
+	struct vnode *vp;
 	vm_paddr_t pa, pa_last, size;
 	vm_page_t deferred_vdrop_list, m, m_ret;
 	int flind, i, oind, order, pind;
+	
 
 	size = npages << PAGE_SHIFT;
 	KASSERT(size != 0,
@@ -687,50 +783,20 @@
 	vm_phys_split_pages(m_ret, oind, fl, order);
 	for (i = 0; i < npages; i++) {
 		m = &m_ret[i];
-		KASSERT(m->queue == PQ_NONE,
-		    ("vm_phys_alloc_contig: page %p has unexpected queue %d",
-		    m, m->queue));
-		KASSERT(m->wire_count == 0,
-		    ("vm_phys_alloc_contig: page %p is wired", m));
-		KASSERT(m->hold_count == 0,
-		    ("vm_phys_alloc_contig: page %p is held", m));
-		KASSERT(m->busy == 0,
-		    ("vm_phys_alloc_contig: page %p is busy", m));
-		KASSERT(m->dirty == 0,
-		    ("vm_phys_alloc_contig: page %p is dirty", m));
-		KASSERT(pmap_page_get_memattr(m) == VM_MEMATTR_DEFAULT,
-		    ("vm_phys_alloc_contig: page %p has unexpected memattr %d",
-		    m, pmap_page_get_memattr(m)));
-		if ((m->flags & PG_CACHED) != 0) {
-			m->valid = 0;
-			m_object = m->object;
-			vm_page_cache_remove(m);
-			if (m_object->type == OBJT_VNODE &&
-			    m_object->cache == NULL) {
-				/*
-				 * Enqueue the vnode for deferred vdrop().
-				 *
-				 * Unmanaged pages don't use "pageq", so it
-				 * can be safely abused to construct a short-
-				 * lived queue of vnodes.
-				 */
-				m->pageq.tqe_prev = m_object->handle;
-				m->pageq.tqe_next = deferred_vdrop_list;
-				deferred_vdrop_list = m;
-			}
-		} else {
-			KASSERT(VM_PAGE_IS_FREE(m),
-			    ("vm_phys_alloc_contig: page %p is not free", m));
-			KASSERT(m->valid == 0,
-			    ("vm_phys_alloc_contig: free page %p is valid", m));
-			cnt.v_free_count--;
+		vm_page_alloc_init(m, &vp);
+		if (vp != NULL) {
+			/*
+			 * Enqueue the vnode for deferred vdrop().
+			 *
+			 * Unmanaged pages don't use "pageq", so it
+			 * can be safely abused to construct a short-
+			 * lived queue of vnodes.
+			 */
+
+			m->pageq.tqe_prev = (void *)vp;
+			m->pageq.tqe_next = deferred_vdrop_list;
+			deferred_vdrop_list = m;
 		}
-		if (m->flags & PG_ZERO)
-			vm_page_zero_count--;
-		/* Don't clear the PG_ZERO flag; we'll need it later. */
-		m->flags = PG_UNMANAGED | (m->flags & PG_ZERO);
-		m->oflags = 0;
-		/* Unmanaged pages don't use "act_count". */
 	}
 	for (; i < roundup2(npages, 1 << imin(oind, order)); i++) {
 		m = &m_ret[i];
Index: sys/vm/vm_phys.h
===================================================================
--- sys/vm/vm_phys.h	(revision 208890)
+++ sys/vm/vm_phys.h	(working copy)
@@ -44,6 +44,7 @@
 vm_page_t vm_phys_alloc_contig(unsigned long npages,
     vm_paddr_t low, vm_paddr_t high,
     unsigned long alignment, unsigned long boundary);
+vm_page_t vm_phys_page_alloc(int flind, int pool, int order);
 vm_page_t vm_phys_alloc_pages(int pool, int order);
 vm_paddr_t vm_phys_bootstrap_alloc(vm_size_t size, unsigned long alignment);
 void vm_phys_free_pages(vm_page_t m, int order);

