svn commit: r355145 - head/sys/arm64/arm64

Michal Meloun meloun.michal at gmail.com
Thu Nov 28 08:48:06 UTC 2019



On 27.11.2019 21:33, Alan Cox wrote:
> Author: alc
> Date: Wed Nov 27 20:33:49 2019
> New Revision: 355145
> URL: https://svnweb.freebsd.org/changeset/base/355145
> 
> Log:
>   There is no reason why we need to pin the underlying thread to its current
>   processor in pmap_invalidate_{all,page,range}().  These functions are using
>   an instruction that broadcasts the TLB invalidation to every processor, so
>   even if a thread migrates in the middle of one of these functions every
>   processor will still perform the required TLB invalidations.
I think this assumption is not right. The problem is not in the TLB
operations themselves, but in the following 'dsb' and/or 'isb'. The
'dsb' ensures that all TLB operations transmitted by the local CPU have
completed and are visible to other observers, but it does nothing for
TLB operations emitted by other CPUs.
For example, if a given thread is rescheduled after all the TLB
operations have been issued but before the 'dsb' or 'isb' executes,
then the requested synchronization does not occur at all.
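
To make the window concrete, here is a rough sketch of
pmap_invalidate_page() with the pinning restored and the hazard
annotated (illustrative only, abridged from the shape of the code in
the diff below, not the exact pmap.c source):

	/*
	 * Illustrative sketch; not the exact FreeBSD code.
	 */
	static __inline void
	pmap_invalidate_page(pmap_t pmap, vm_offset_t va)
	{
		sched_pin();	/* stay on this CPU ... */
		dsb(ishst);
		/* ... issue the broadcast 'tlbi' for va ... */
		/*
		 * Without the pin, the thread could migrate here: the
		 * 'tlbi' was issued on CPU A, but the barriers below
		 * would then execute on CPU B.  DSB only waits for TLB
		 * maintenance issued by the CPU executing it, so the
		 * invalidation issued on CPU A is not guaranteed to
		 * have completed when this function returns.
		 */
		dsb(ish);
		isb();
		sched_unpin();	/* ... until the barriers have run */
	}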

Michal

>   
>   Reviewed by:	andrew, markj
>   MFC after:	10 days
>   Differential Revision:	https://reviews.freebsd.org/D22502
> 
> Modified:
>   head/sys/arm64/arm64/pmap.c
> 
> Modified: head/sys/arm64/arm64/pmap.c
> ==============================================================================
> --- head/sys/arm64/arm64/pmap.c	Wed Nov 27 20:32:53 2019	(r355144)
> +++ head/sys/arm64/arm64/pmap.c	Wed Nov 27 20:33:49 2019	(r355145)
> @@ -1043,7 +1043,6 @@ pmap_invalidate_page(pmap_t pmap, vm_offset_t va)
>  {
>  	uint64_t r;
>  
> -	sched_pin();
>  	dsb(ishst);
>  	if (pmap == kernel_pmap) {
>  		r = atop(va);
> @@ -1054,11 +1053,10 @@ pmap_invalidate_page(pmap_t pmap, vm_offset_t va)
>  	}
>  	dsb(ish);
>  	isb();
> -	sched_unpin();
>  }
>  
>  static __inline void
> -pmap_invalidate_range_nopin(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
> +pmap_invalidate_range(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
>  {
>  	uint64_t end, r, start;
>  
> @@ -1080,20 +1078,10 @@ pmap_invalidate_range_nopin(pmap_t pmap, vm_offset_t s
>  }
>  
>  static __inline void
> -pmap_invalidate_range(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
> -{
> -
> -	sched_pin();
> -	pmap_invalidate_range_nopin(pmap, sva, eva);
> -	sched_unpin();
> -}
> -
> -static __inline void
>  pmap_invalidate_all(pmap_t pmap)
>  {
>  	uint64_t r;
>  
> -	sched_pin();
>  	dsb(ishst);
>  	if (pmap == kernel_pmap) {
>  		__asm __volatile("tlbi vmalle1is");
> @@ -1103,7 +1091,6 @@ pmap_invalidate_all(pmap_t pmap)
>  	}
>  	dsb(ish);
>  	isb();
> -	sched_unpin();
>  }
>  
>  /*
> @@ -3114,7 +3101,7 @@ pmap_update_entry(pmap_t pmap, pd_entry_t *pte, pd_ent
>  	 * lookup the physical address.
>  	 */
>  	pmap_clear_bits(pte, ATTR_DESCR_VALID);
> -	pmap_invalidate_range_nopin(pmap, va, va + size);
> +	pmap_invalidate_range(pmap, va, va + size);
>  
>  	/* Create the new mapping */
>  	pmap_store(pte, newpte);
> 
