svn commit: r204297 - head/sys/powerpc/aim

Nathan Whitehorn nwhitehorn at freebsd.org
Thu Feb 25 03:56:17 UTC 2010


I can't seem to type the correct words tonight. This should have read:

  Move the OEA64 scratchpage to the end of KVA from the beginning, and set
  its PVO to map physical address 0 instead of kernelstart. This fixes a
  situation in which a user process could attempt to read this address
  via KVM, have it fault while being modified, and then panic the kernel
  because (a) it is supposed to map a valid address and (b) it lies in the
  no-fault region between VM_MIN_KERNEL_ADDRESS and virtual_avail.
  
  While here, move msgbuf and dpcpu back into regular KVA space for
  consistency with other implementations.

-Nathan


Nathan Whitehorn wrote:
> Author: nwhitehorn
> Date: Thu Feb 25 03:53:21 2010
> New Revision: 204297
> URL: http://svn.freebsd.org/changeset/base/204297
>
> Log:
>   Move the OEA64 scratchpage to the end of KVA from the beginning, and set
>   its PVO to map physical address 0 instead of kernelstart. This fixes a
>   situation in which a user process could attempt to return this address
>   via KVM, have it fault while being modified, and then panic the kernel
>   because (a) it is supposed to map a valid address and (b) it lies in the
>   no-fault region between VM_MIN_KERNEL_ADDRESS and virtual_avail.
>   
>   While here, move msgbuf and dpcpu make into regular KVA space for
>   consistency with other implementations.
>
> Modified:
>   head/sys/powerpc/aim/mmu_oea64.c
>
> Modified: head/sys/powerpc/aim/mmu_oea64.c
> ==============================================================================
> --- head/sys/powerpc/aim/mmu_oea64.c	Thu Feb 25 03:49:17 2010	(r204296)
> +++ head/sys/powerpc/aim/mmu_oea64.c	Thu Feb 25 03:53:21 2010	(r204297)
> @@ -970,10 +970,10 @@ moea64_bridge_bootstrap(mmu_t mmup, vm_o
>  
>  	mtx_init(&moea64_scratchpage_mtx, "pvo zero page", NULL, MTX_DEF);
>  	for (i = 0; i < 2; i++) {
> -		moea64_scratchpage_va[i] = virtual_avail;
> -		virtual_avail += PAGE_SIZE;
> +		moea64_scratchpage_va[i] = (virtual_end+1) - PAGE_SIZE;
> +		virtual_end -= PAGE_SIZE;
>  
> -		moea64_kenter(mmup,moea64_scratchpage_va[i],kernelstart);
> +		moea64_kenter(mmup,moea64_scratchpage_va[i],0);
>  
>  		LOCK_TABLE();
>  		moea64_scratchpage_pvo[i] = moea64_pvo_find_va(kernel_pmap,
> @@ -1004,20 +1004,25 @@ moea64_bridge_bootstrap(mmu_t mmup, vm_o
>  	 * Allocate virtual address space for the message buffer.
>  	 */
>  	pa = msgbuf_phys = moea64_bootstrap_alloc(MSGBUF_SIZE, PAGE_SIZE);
> -	msgbufp = (struct msgbuf *)msgbuf_phys;
> -	while (pa - msgbuf_phys < MSGBUF_SIZE) {
> -		moea64_kenter(mmup, pa, pa);
> +	msgbufp = (struct msgbuf *)virtual_avail;
> +	va = virtual_avail;
> +	virtual_avail += round_page(MSGBUF_SIZE);
> +	while (va < virtual_avail) {
> +		moea64_kenter(mmup, va, pa);
>  		pa += PAGE_SIZE;
> +		va += PAGE_SIZE;
>  	}
>  
>  	/*
>  	 * Allocate virtual address space for the dynamic percpu area.
>  	 */
>  	pa = moea64_bootstrap_alloc(DPCPU_SIZE, PAGE_SIZE);
> -	dpcpu = (void *)pa;
> -	while (pa - (vm_offset_t)dpcpu < DPCPU_SIZE) {
> -		moea64_kenter(mmup, pa, pa);
> +	dpcpu = (void *)virtual_avail;
> +	virtual_avail += DPCPU_SIZE;
> +	while (va < virtual_avail) {
> +		moea64_kenter(mmup, va, pa);
>  		pa += PAGE_SIZE;
> +		va += PAGE_SIZE;
>  	}
>  	dpcpu_init(dpcpu, 0);
>  }
>   