mmu_map.h review/justification - was => RE: svn commit: r235904 - in projects/amd64_xen_pv/sys: amd64/xen conf

cherry at zyx.in
Thu May 24 17:02:23 UTC 2012


Hi,

Apologies for the top-post - I'm having mail client difficulties at the
moment.

I have received private feedback about the contents of the commit log
message, so I thought I would address it here, and keep it in mind for
future commits. The main question is the rationale/justification for the
API in the diff below, which I will try to address here:

Facts:
- The x86 MMU hardware works with a hierarchical page table. Pages in the
hierarchy refer to one another by physical address.
- The processor itself, however, accesses all memory via virtual addresses.

What this means is that in order for the kernel to manipulate page table
entries, it needs the pages of the hierarchy themselves to be mapped into
its virtual address space. However, this requires that the page tables
already be set up to facilitate those mappings - clearly a bootstrap
problem.
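
To make the circularity concrete, here is a minimal sketch (the ptov()
and pt_index() helpers here are hypothetical) of what installing a single
translation entails:

/*
 * Page table pages are linked by physical address, but the CPU can
 * only dereference virtual addresses. So before we can write the
 * pt_entry_t for va, the page table page that holds it must itself
 * already be mapped somewhere - the bootstrap problem.
 */
static void
set_pte(vm_offset_t va, vm_paddr_t pa, vm_paddr_t ptpage_pa)
{
	pt_entry_t *pt;

	pt = (pt_entry_t *)ptov(ptpage_pa);	/* needs a prior mapping! */
	pt[pt_index(va)] = pa | PG_V | PG_RW;
}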

To address this problem, I've attempted to come up with an API that is
easy to use, whose objective is to do "whatever it takes" to the MMU to
make the pages of the physical hierarchy visible to the kernel virtual
address space.

The use of the API is quite intuitive. One basically says to it: "Do
whatever it takes to make the mapping va->pa viable". On x86, this means
setting up the page table hierarchy (if this has not already been done)
and providing a means for the caller to inspect the contents of the
backing pages. On other architectures - those with software TLBs, for
example - I would envisage the implementation returning a direct-mapped
segment offset to the page, so that the caller can then insert it into
the soft TLB. Admittedly I haven't given much thought to it beyond that -
I've only attempted not to restrict the API design to the x86 paging
architecture alone, as far as is possible.
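
To give a feel for it, a caller might look roughly like this (a sketch
based on the declarations in mmu_map.h below; the cookie sizing and the
ptmb_* backend callbacks are my assumptions, with the callbacks sketched
further down):

static void
example_hold(struct pmap *pm, vm_offset_t va)
{
	char tbuf[128];	/* opaque cookie; mmu_map_t_size() bytes suffice */
	struct mmu_map_mbackend mb = {
		.alloc = ptmb_alloc,	/* hypothetical backend, see below */
		.free = NULL,		/* optional */
		.ptov = ptmb_ptov,
		.vtop = ptmb_vtop
	};

	KASSERT(mmu_map_t_size() <= sizeof(tbuf), ("mmu_map_t too big"));
	mmu_map_t_init(tbuf, &mb);

	if (!mmu_map_inspect_va(pm, tbuf, va))
		mmu_map_hold_va(pm, tbuf, va);	/* build any missing levels */

	/*
	 * All four levels backing va now exist and are mapped; the
	 * leaf page table, e.g., is visible via mmu_map_pt(tbuf).
	 */

	mmu_map_t_fini(tbuf);
}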

In order for the implementation to do "whatever it takes", the caller
provides a set of callbacks that allocate resources as required. On x86,
the callbacks need to make sure that the returned memory, which will be
used to back page tables, is already mapped into the kernel address
space. On Xen, we have the additional requirement that page tables not be
mapped into the KVA as writable pages. On the upside, Xen provides us
with about 512KB of already-mapped memory at boot, so we just lop off
chunks of it via vallocpages() (see pmap.c). That's it, really. It's a
fairly intuitive API once the background is in place. On the downside,
it's bound to be ridiculously suboptimal, and I imagine there are other
subsystems already in the kernel which provide this functionality -
albeit unavailable at boot (the reason I wrote this API). As I get along
with the port, I will make a decision (I'm open to feedback here) about
the future of the API within the amd64/xen subdirectory.
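
As a concrete illustration, the boot-time backend could be shaped roughly
as follows (the vallocpages() signature and the constant-offset ptov/vtop
conversions here are my assumptions - the real thing lives in pmap.c):

/* Carve page table pages out of the pre-mapped boot memory. */
static vm_offset_t
ptmb_alloc(size_t size)
{

	return (vallocpages(howmany(size, PAGE_SIZE)));
}

/* Boot pages are mapped at a constant offset into the KVA. */
static vm_offset_t
ptmb_ptov(vm_paddr_t pa)
{

	return (pa + KERNBASE);
}

static vm_paddr_t
ptmb_vtop(vm_offset_t va)
{

	return (va - KERNBASE);
}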

I don't expect this API to be used outside of the x86 architecture, or
outside of the amd64/xen port, at this point, but I've written it with a
view to it being useful more widely, if possible.

I hope that was a useful explanation.

Many Thanks,

Cherry.


Original Message:
-----------------
From: Cherry G. Mathew cherry at FreeBSD.org
Date: Thu, 24 May 2012 12:02:11 +0000 (UTC)
To: src-committers at freebsd.org, svn-src-projects at freebsd.org
Subject: svn commit: r235904 - in projects/amd64_xen_pv/sys: amd64/xen conf


Author: cherry
Date: Thu May 24 12:02:10 2012
New Revision: 235904
URL: http://svn.freebsd.org/changeset/base/235904

Log:
  This API is an attempt to abstract the MMU state in an MI fashion. It is
heavily WIP and may or may not go away from amd64/, depending on how
things go with the direct-mapped implementation

Added:
  projects/amd64_xen_pv/sys/amd64/xen/mmu_map.c
  projects/amd64_xen_pv/sys/amd64/xen/mmu_map.h
Modified:
  projects/amd64_xen_pv/sys/conf/files.amd64

Added: projects/amd64_xen_pv/sys/amd64/xen/mmu_map.c
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ projects/amd64_xen_pv/sys/amd64/xen/mmu_map.c	Thu May 24 12:02:10 2012	(r235904)
@@ -0,0 +1,389 @@
+/* $FreeBSD$ */
+/*-
+ * Copyright (c) 2011-2012 Spectra Logic Corporation
+ * All rights reserved.
+ *
+ * This software was developed by Cherry G. Mathew <cherry at FreeBSD.org>
+ * under sponsorship from Spectra Logic Corporation.
+ * 
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ *    substantially similar to the "NO WARRANTY" disclaimer below
+ *    ("Disclaimer") and any redistribution must be conditioned upon
+ *    including a substantially similar Disclaimer requirement for further
+ *    binary redistribution.
+ *
+ * NO WARRANTY
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGES.
+ */
+
+
+/*
+ * This file implements the API that manages the page table
+ * hierarchy for the amd64 Xen pmap.
+ */
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+#include "opt_cpu.h"
+#include "opt_pmap.h"
+#include "opt_smp.h"
+
+
+#include <sys/param.h>
+#include <sys/systm.h>
+#include <sys/types.h>
+
+#include <vm/vm.h>
+#include <vm/vm_param.h>
+#include <vm/pmap.h>
+
+#include <xen/hypervisor.h>
+#include <machine/xen/xenvar.h>
+
+#include <amd64/xen/mmu_map.h>
+
+static int
+pml4t_index(vm_offset_t va)
+{
+	/* amd64 sign extends 48th bit and upwards */
+	const uint64_t SIGNMASK = (1UL << 48) - 1;
+	va &= SIGNMASK; /* Remove sign extension */
+
+	return (va >> PML4SHIFT); 
+}
+
+static int
+pdpt_index(vm_offset_t va)
+{
+	/* amd64 sign extends 48th bit and upwards */
+	const uint64_t SIGNMASK = (1UL << 48) - 1;
+	va &= SIGNMASK; /* Remove sign extension */
+
+	return ((va & PML4MASK) >> PDPSHIFT);
+}
+
+static int
+pdt_index(vm_offset_t va)
+{
+	/* amd64 sign extends 48th bit and upwards */
+	const uint64_t SIGNMASK = (1UL << 48) - 1;
+	va &= SIGNMASK; /* Remove sign extension */
+
+	return ((va & PDPMASK) >> PDRSHIFT);
+}
+
+/* 
+ * The table get functions below assume that a table cannot exist at
+ * address 0
+ */
+static pml4_entry_t *
+pmap_get_pml4t(struct pmap *pm)
+{
+	KASSERT(pm != NULL,
+		("NULL pmap passed in!\n"));
+
+	pml4_entry_t *pm_pml4 = pm->pm_pml4;
+	
+	KASSERT(pm_pml4 != NULL,
+		("pmap has NULL pml4!\n"));
+
+	return pm->pm_pml4;
+}
+
+/* Returns physical address */
+static vm_paddr_t
+pmap_get_pdpt(vm_offset_t va, pml4_entry_t *pml4t)
+{
+	pml4_entry_t pml4e;
+
+	KASSERT(va <= VM_MAX_KERNEL_ADDRESS,
+		("invalid address requested"));
+	KASSERT(pml4t != 0, ("pml4t cannot be zero"));
+
+	pml4e = pml4t[pml4t_index(va)];
+
+	if (!(pml4e & PG_V)) {
+		return 0;
+	}
+
+	return xpmap_mtop(pml4e & PG_FRAME);
+}
+
+/* Returns physical address */
+static vm_paddr_t
+pmap_get_pdt(vm_offset_t va, pdp_entry_t *pdpt)
+{
+	pdp_entry_t pdpe;
+
+	KASSERT(va <= VM_MAX_KERNEL_ADDRESS,
+		("invalid address requested"));
+	KASSERT(pdpt != 0, ("pdpt cannot be zero"));
+
+	pdpe = pdpt[pdpt_index(va)];
+
+	if (!(pdpe & PG_V)) {
+		return 0;
+	}
+
+	return xpmap_mtop(pdpe & PG_FRAME);
+}
+
+/* Returns physical address */
+static vm_paddr_t
+pmap_get_pt(vm_offset_t va, pd_entry_t *pdt)
+{
+	pd_entry_t pdte;
+
+	KASSERT(va <= VM_MAX_KERNEL_ADDRESS,
+		("invalid address requested"));
+
+	KASSERT(pdt != 0, ("pdt cannot be zero"));
+
+	pdte = pdt[pdt_index(va)];
+
+	if (!(pdte & PG_V)) {
+		return 0;
+	}
+
+	return xpmap_mtop(pdte & PG_FRAME);
+}
+
+/* 
+ * This structure defines the 4 indices that a given virtual
+ * address lookup would traverse.
+ *
+ * Note: this structure is opaque to API customers. Callers give us an
+ * untyped array which is marshalled/unmarshalled inside of the
+ * stateful api.
+ */
+
+static const uint64_t SANE = 0xcafebabe;
+
+struct mmu_map_index {
+	pml4_entry_t *pml4t; /* Page Map Level 4 Table */
+	pdp_entry_t *pdpt;  /* Page Directory Pointer Table */
+	pd_entry_t *pdt;   /* Page Directory Table */
+	pt_entry_t *pt;    /* Page Table */
+
+	struct mmu_map_mbackend ptmb; /* Backend info */
+
+	uint32_t sanity; /* 32 bit (for alignment) magic XXX:
+			  * Make optional on DEBUG */
+};
+
+size_t
+mmu_map_t_size(void)
+{
+	return sizeof (struct mmu_map_index);
+}
+
+void
+mmu_map_t_init(void *addr, struct mmu_map_mbackend *mb)
+{
+	KASSERT((addr != NULL) && (mb != NULL), ("NULL args given!"));
+	struct mmu_map_index *pti = addr;
+	KASSERT(pti->sanity != SANE, ("index initialised twice!"));
+	KASSERT(mb->alloc != NULL &&
+		mb->ptov != NULL &&
+		mb->vtop != NULL,
+		("mandatory backend callbacks missing"));
+
+	pti->ptmb = *mb;
+
+	/* Backend page allocation should provide default VA mapping */
+	pti->sanity = SANE;
+}
+
+void
+mmu_map_t_fini(void *addr)
+{
+	KASSERT(addr != NULL, ("NULL args given!"));
+
+	struct mmu_map_index *pti = addr;
+	KASSERT(pti->sanity == SANE, ("Uninitialised index cookie used"));
+	struct mmu_map_mbackend *mb = &pti->ptmb;
+
+	pti->sanity = 0;
+
+	if (mb->free != NULL) {
+		/* XXX: go through PT hierarchy and free + unmap
+		 * unused tables */ 
+	}
+}
+
+pd_entry_t *
+mmu_map_pml4t(void *addr)
+{
+	KASSERT(addr != NULL, ("NULL args given!"));
+	struct mmu_map_index *pti = addr;
+
+	KASSERT(pti->sanity == SANE, ("Uninitialised index cookie used"));
+
+	return pti->pml4t;
+}
+
+pd_entry_t *
+mmu_map_pdpt(void *addr)
+{
+	KASSERT(addr != NULL, ("NULL args given!"));
+	struct mmu_map_index *pti = addr;
+
+	KASSERT(pti->sanity == SANE, ("Uninitialised index cookie used"));
+
+	return pti->pdpt;
+}
+
+pd_entry_t *
+mmu_map_pdt(void *addr)
+{
+	KASSERT(addr != NULL, ("NULL args given!"));
+	struct mmu_map_index *pti = addr;
+
+	KASSERT(pti->sanity == SANE, ("Uninitialised index cookie used"));
+
+	return pti->pdt;
+}
+
+pd_entry_t *
+mmu_map_pt(void *addr)
+{
+	KASSERT(addr != NULL, ("NULL args given!"));
+	struct mmu_map_index *pti = addr;
+
+	KASSERT(pti->sanity == SANE, ("Uninitialised index cookie used"));
+
+	return pti->pt;
+}
+
+bool
+mmu_map_inspect_va(struct pmap *pm, void *addr, vm_offset_t va)
+{
+	KASSERT(addr != NULL && pm != NULL, ("NULL arg(s) given"));
+
+	struct mmu_map_index *pti = addr;
+	KASSERT(pti->sanity == SANE, ("Uninitialised index cookie used"));
+
+	vm_paddr_t pt;
+
+	pti->pml4t = pmap_get_pml4t(pm);
+
+	pt = pmap_get_pdpt(va, pti->pml4t);
+
+	if (pt == 0) {
+		return false;
+	} else {
+		pti->pdpt = (pdp_entry_t *) pti->ptmb.ptov(pt);
+	}
+
+	pt = pmap_get_pdt(va, pti->pdpt);
+
+	if (pt == 0) {
+		return false;
+	} else {
+		pti->pdt = (pd_entry_t *) pti->ptmb.ptov(pt);
+	}
+
+	pt = pmap_get_pt(va, pti->pdt);
+
+	if (pt == 0) {
+		return false;
+	} else {
+		pti->pt = (pt_entry_t *)pti->ptmb.ptov(pt);
+	}
+
+	return true;
+}
+extern uint64_t xenstack; /* The stack Xen gives us at boot */
+void
+mmu_map_hold_va(struct pmap *pm, void *addr, vm_offset_t va)
+{
+	KASSERT(addr != NULL && pm != NULL, ("NULL arg(s) given"));
+
+	struct mmu_map_index *pti = addr;
+	KASSERT(pti->sanity == SANE, ("Uninitialised index cookie used"));
+
+	vm_paddr_t pt;
+
+	pti->pml4t = pmap_get_pml4t(pm);
+
+	pt = pmap_get_pdpt(va, pti->pml4t);
+
+	if (pt == 0) {
+		pml4_entry_t *pml4tep;
+		vm_paddr_t pml4tep_ma;
+		pml4_entry_t pml4te;
+
+		pti->pdpt = (pdp_entry_t *)pti->ptmb.alloc(PAGE_SIZE);
+
+		pml4tep = &pti->pml4t[pml4t_index(va)];
+		pml4tep_ma = xpmap_ptom(pti->ptmb.vtop((vm_offset_t)pml4tep));
+	pml4te = xpmap_ptom(pti->ptmb.vtop((vm_offset_t)pti->pdpt)) | PG_RW | PG_V | PG_U; /* XXX: revisit flags */
+		xen_queue_pt_update(pml4tep_ma, pml4te);
+
+	} else {
+		pti->pdpt = (pdp_entry_t *) pti->ptmb.ptov(pt);
+	}
+
+	pt = pmap_get_pdt(va, pti->pdpt);
+
+	if (pt == 0) {
+		pdp_entry_t *pdptep;
+		vm_paddr_t pdptep_ma;
+		pdp_entry_t pdpte;
+
+		pti->pdt = (pd_entry_t *)pti->ptmb.alloc(PAGE_SIZE);
+
+		pdptep = &pti->pdpt[pdpt_index(va)];
+		pdptep_ma = xpmap_ptom(pti->ptmb.vtop((vm_offset_t)pdptep));
+	pdpte = xpmap_ptom(pti->ptmb.vtop((vm_offset_t)pti->pdt)) | PG_RW | PG_V | PG_U; /* XXX: revisit flags */
+		xen_queue_pt_update(pdptep_ma, pdpte);
+		
+	} else {
+		pti->pdt = (pd_entry_t *) pti->ptmb.ptov(pt);
+	}
+
+	pt = pmap_get_pt(va, pti->pdt);
+
+	if (pt == 0) {
+		pd_entry_t *pdtep;
+		vm_paddr_t pdtep_ma;
+		pd_entry_t pdte;
+
+		pti->pt = (pt_entry_t *) pti->ptmb.alloc(PAGE_SIZE);
+
+		pdtep = &pti->pdt[pdt_index(va)];
+		pdtep_ma = xpmap_ptom(pti->ptmb.vtop((vm_offset_t)pdtep));
+	pdte = xpmap_ptom(pti->ptmb.vtop((vm_offset_t)pti->pt)) | PG_RW | PG_V | PG_U; /* XXX: revisit flags */
+		xen_queue_pt_update(pdtep_ma, pdte);
+
+	} else {
+		pti->pt = (pt_entry_t *) pti->ptmb.ptov(pt);
+	}
+}
+
+void
+mmu_map_release_va(struct pmap *pm, void *addr, vm_offset_t va)
+{
+
+	KASSERT(addr != NULL && pm != NULL, ("NULL arg(s) given"));
+
+	struct mmu_map_index *pti = addr;
+	KASSERT(pti->sanity == SANE, ("Uninitialised index cookie used"));
+
+	/* XXX: */
+}

Added: projects/amd64_xen_pv/sys/amd64/xen/mmu_map.h
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ projects/amd64_xen_pv/sys/amd64/xen/mmu_map.h	Thu May 24 12:02:10 2012	(r235904)
@@ -0,0 +1,152 @@
+/* $FreeBSD$ */
+/*-
+ * Copyright (c) 2011-2012 Spectra Logic Corporation
+ * All rights reserved.
+ *
+ * This software was developed by Cherry G. Mathew <cherry at FreeBSD.org>
+ * under sponsorship from Spectra Logic Corporation.
+ * 
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ *    substantially similar to the "NO WARRANTY" disclaimer below
+ *    ("Disclaimer") and any redistribution must be conditioned upon
+ *    including a substantially similar Disclaimer requirement for further
+ *    binary redistribution.
+ *
+ * NO WARRANTY
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGES.
+ */
+
+#ifndef _XEN_MMU_MAP_H_
+#define _XEN_MMU_MAP_H_
+
+#include <sys/types.h>
+
+#include <machine/pmap.h>
+
+/* 
+ *
+ * This API abstracts, in an MI fashion, the paging mechanism of an
+ * arbitrary CPU architecture as an opaque FSM, which may then be
+ * subject to inspection in MD ways.
+ *
+ * Use of this API can have the following effects on the VM system and
+ * the kernel address space:
+ *
+ * - physical memory pages may be allocated.
+ * - physical memory pages may be de-allocated.
+ * - kernel virtual address space may be allocated.
+ * - kernel virtual address space may be de-allocated.
+ * - The page table hierarchy may be modified.
+ * - TLB entries may be invalidated.
+ *
+ * The API is stateful, and designed around the following principles:
+ * - Simplicity
+ * - Object orientation
+ * - Code reuse.
+ */
+
+/* 
+ * We hide the page table structure behind an opaque "index" cookie
+ * which acts as the "key" to a given va->pa mapping being inspected.
+ */
+typedef void * mmu_map_t;
+
+/*
+ * Memory backend types:
+ * 
+ * We provide a means to allocate ad-hoc memory/physical page
+ * requirements to the paging mechanism by means of a "backend"
+ * alloc function
+ *
+ * The memory backend is required to provide physical pages that are
+ * at least temporarily mapped into the kernel VA space and whose
+ * contents are thus accessible by a simple pointer indirection from
+ * within the kernel. This requirement may be revoked after conclusion
+ * of an instance of stateful usage of the API ( See:
+ * mmu_map_t_fini() below ), at which point the backend
+ * implementation is free to unmap any temporary mappings if so
+ * desired. (XXX: review this for non-x86)
+ *
+ * Note: Only the mappings may be revoked - any physical pages
+ * themselves allocated by the backend are considered allocated and
+ * part of the paging mechanism.
+ */
+
+struct mmu_map_mbackend { /* Callbacks */
+
+	vm_offset_t (*alloc)(size_t);
+	void (*free)(vm_offset_t); /* May be NULL */
+
+	/* 
+	 * vtop()/ptov() conversion functions:
+	 * These callbacks typically provide conversions for mapped
+	 * pages allocated via the alloc()/free() callbacks (above).
+	 * The API implementation is free to cache the mappings across
+	 * multiple instances of use; ie; mappings may persist across 
+	 * one pair of mmu_map_t_init()/.._finit() calls.
+	 */
+	vm_offset_t (*ptov)(vm_paddr_t);
+	vm_paddr_t (*vtop)(vm_offset_t);
+};
+
+/* 
+ * Return sizeof (mmu_map_t) as implemented within the api
+ * This may then be used to allocate untyped memory for the cookie
+ * which can then be operated on opaquely behind the API in a machine
+ * specific manner.
+ */
+size_t mmu_map_t_size(void);
+
+/*
+ * Initialise the API state to use a specified memory backend 
+ */
+void mmu_map_t_init(mmu_map_t, struct mmu_map_mbackend *);
+
+/* Conclude this instance of use of the API */
+void mmu_map_t_fini(mmu_map_t);
+
+/* Set "index" cookie state based on va lookup. This state may then be
+ * inspected in MD ways ( See below ). Note that every call to the
+ * following functions can change the state of the backing paging
+ * mechanism FSM.
+ */
+bool mmu_map_inspect_va(struct pmap *, mmu_map_t, vm_offset_t);
+/* 
+ * Unconditionally allocate resources to setup and "inspect" (as
+ * above) a given va->pa mapping 
+ */
+void mmu_map_hold_va(struct pmap *,  mmu_map_t, vm_offset_t);
+
+/* Optionally release resources after tear down of a va->pa mapping */
+void mmu_map_release_va(struct pmap *, mmu_map_t, vm_offset_t);
+
+/* 
+ * Machine dependant "view" into the page table hierarchy FSM.
+ * On amd64, there are four tables that are consulted for a va->pa
+ * translation. This information may be extracted by the MD functions
+ * below and is only considered valid between a successful call to
+ * mmu_map_inspect_va() or mmu_map_hold_va() and a subsequent
+ * call to mmu_map_release_va()
+ */
+pd_entry_t * mmu_map_pml4t(mmu_map_t); /* Page Map Level 4 Table */
+pd_entry_t * mmu_map_pdpt(mmu_map_t);  /* Page Directory Pointer Table */
+pd_entry_t * mmu_map_pdt(mmu_map_t);   /* Page Directory Table */
+pd_entry_t * mmu_map_pt(mmu_map_t);    /* Page Table */
+
+#endif /*  !_XEN_MMU_MAP_H_ */

Modified: projects/amd64_xen_pv/sys/conf/files.amd64
==============================================================================
--- projects/amd64_xen_pv/sys/conf/files.amd64	Thu May 24 11:52:57 2012	(r235903)
+++ projects/amd64_xen_pv/sys/conf/files.amd64	Thu May 24 12:02:10 2012	(r235904)
@@ -128,6 +128,7 @@ amd64/amd64/mpboot.S		optional	native sm
 amd64/xen/mpboot.c		optional	xen smp
 amd64/amd64/pmap.c		optional	native
 amd64/xen/pmap.c		optional	xen
+amd64/xen/mmu_map.c		optional	xen
 amd64/amd64/prof_machdep.c	optional	profiling-routine
 amd64/amd64/ptrace_machdep.c	standard
 amd64/amd64/sigtramp.S		standard

