Review needed
Alan Cox
alc at rice.edu
Sun May 6 17:43:13 UTC 2012
On 05/05/2012 01:45, Warner Losh wrote:
> I just tried to bring up an old Gateworks AVILA board that I'd acquired a long time ago....
>
> After fighting through a few bugs in the nanobsd build, I managed to get an image.
>
> That image failed to boot. When it harvested entropy, it just started printing pmap_mincore() over and over again until I hit ^T enough to let the rest of the boot process continue. Sure enough, arm's pmap_mincore() returns 0 after printing this.
>
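(The existing arm pmap_mincore() is presumably just a stub along these
lines, matching the symptom described above: it prints its name and
returns 0.)

	int
	pmap_mincore(pmap_t pmap, vm_offset_t addr, vm_paddr_t *locked_pa)
	{

		printf("pmap_mincore()\n");
		return (0);
	}
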
> So, I cobbled together the following after looking at mips and amd64's pmap_mincore() functions.
>
> I've just cut and pasted it here since I think that might be easier to review than a diff, but I've also uploaded that as well to http://people.freebsd.org/~imp/arm-pmap_mincore.diff
>
> Comments?
>
> Warner
>
> /*
>  * perform the pmap work for mincore
>  */
> int
> pmap_mincore(pmap_t pmap, vm_offset_t addr, vm_paddr_t *locked_pa)
> {
> 	struct l2_bucket *l2b;
> 	pt_entry_t *ptep, pte;
> 	vm_paddr_t pa;
> 	vm_page_t m;
> 	int val;
> 	boolean_t managed;
>
> 	PMAP_LOCK(pmap);
> retry:
> 	l2b = pmap_get_l2_bucket(pmap, addr);
> 	if (l2b == NULL) {
> 		val = 0;
> 		goto out;
> 	}
> 	ptep = &l2b->l2b_kva[l2pte_index(addr)];
> 	pte = (ptep != NULL) ? *ptep : 0;
> 	if (!l2pte_valid(pte)) {
> 		val = 0;
> 		goto out;
> 	}
> 	val = MINCORE_INCORE;
> 	if (pte & L2_S_PROT_W)
> 		val |= MINCORE_MODIFIED | MINCORE_MODIFIED_OTHER;
> 	managed = false;
> 	pa = l2pte_pa(pte);
> 	m = PHYS_TO_VM_PAGE(pa);
> 	if (m != NULL && !(m->oflags & VPO_UNMANAGED))
> 		managed = true;
> 	if (managed) {
> 		/*
> 		 * This may falsely report the given address as
> 		 * MINCORE_REFERENCED. Unfortunately, due to the lack of
> 		 * per-PTE reference information, it is impossible to
> 		 * determine if the address is MINCORE_REFERENCED.
> 		 */
> 		if ((m->aflags & PGA_REFERENCED) != 0)
I believe that you want to use "if ((m->md.pvh_attrs & PVF_REF) != 0)"
here instead.
Strictly speaking, the ARM pmap tries to maintain a per-mapping
reference bit. The trouble is that it's kept in the PV entry, not the
PTE, so it's costly to access here. You would need to acquire the page
queues lock, call pmap_find_pv(), and introduce a custom version of
vm_page_pa_tryrelock() that releases and reacquires the page queues
lock. In the end, I doubt it's worthwhile. However, it would make
sense for the comment to mention this.
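
Purely as a sketch of what I mean (untested, and the wording of the
comment is only a suggestion), the managed case might read:

	if (managed) {
		/*
		 * The ARM pmap keeps its reference bit in the PV entry
		 * rather than the PTE.  Checking it here would require
		 * the page queues lock, pmap_find_pv(), and a custom
		 * version of vm_page_pa_tryrelock(), so fall back to the
		 * per-page PVF_REF attribute.  This may falsely report
		 * the given address as MINCORE_REFERENCED.
		 */
		if ((m->md.pvh_attrs & PVF_REF) != 0)
			val |= MINCORE_REFERENCED | MINCORE_REFERENCED_OTHER;
	}
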
> 			val |= MINCORE_REFERENCED | MINCORE_REFERENCED_OTHER;
> 	}
> 	if ((val & (MINCORE_MODIFIED_OTHER | MINCORE_REFERENCED_OTHER)) !=
> 	    (MINCORE_MODIFIED_OTHER | MINCORE_REFERENCED_OTHER) && managed) {
> 		/* Ensure that "PHYS_TO_VM_PAGE(pa)->object" doesn't change. */
> 		if (vm_page_pa_tryrelock(pmap, pa, locked_pa))
> 			goto retry;
> 	} else
> out:
> 		PA_UNLOCK_COND(*locked_pa);
> 	PMAP_UNLOCK(pmap);
> 	return (val);
> }
>
>