svn commit: r333990 - head/sys/amd64/amd64

Konstantin Belousov kib at FreeBSD.org
Mon May 21 18:41:17 UTC 2018


Author: kib
Date: Mon May 21 18:41:16 2018
New Revision: 333990
URL: https://svnweb.freebsd.org/changeset/base/333990

Log:
  Add missed barrier for pm_gen/pm_active interaction.
  
  When we issue shootdown IPIs, we first zero pm_gen to indicate the
  need to flush on the next context switch in case our IPI misses the
  context, and then we read pm_active.  On context switch we set our
  bit in pm_active, then read pm_gen.  It is crucial that both threads
  see these memory accesses in program order; otherwise the
  invalidating thread might read the pm_active bit as zero and the
  context-switching thread might read pm_gen as zero.
  
  The IA32 memory model allows the CPU to return zero for both reads.
  We must use a barrier between the write and the read.  The pm_active
  bit set is already a locked operation, so only the invalidation
  functions need an explicit fence.
  
  I have never seen this in real life, or at least I do not have a
  good reproduction case.  I found the problem during code inspection
  while hunting for the Xen TLB issue reported by cperciva.
  
  Reviewed by:	alc, markj
  Sponsored by:	The FreeBSD Foundation
  MFC after:	1 week
  Differential revision:	https://reviews.freebsd.org/D15506

Modified:
  head/sys/amd64/amd64/pmap.c

Modified: head/sys/amd64/amd64/pmap.c
==============================================================================
--- head/sys/amd64/amd64/pmap.c	Mon May 21 17:33:52 2018	(r333989)
+++ head/sys/amd64/amd64/pmap.c	Mon May 21 18:41:16 2018	(r333990)
@@ -1721,6 +1721,18 @@ pmap_invalidate_page(pmap_t pmap, vm_offset_t va)
 				if (cpuid != i)
 					pmap->pm_pcids[i].pm_gen = 0;
 			}
+
+			/*
+			 * The fence is between stores to pm_gen and the read of
+			 * the pm_active mask.  We need to ensure that it is
+			 * impossible for us to miss the bit update in pm_active
+			 * and simultaneously observe a non-zero pm_gen in
+			 * pmap_activate_sw(); otherwise the TLB update is missed.
+			 * Without the fence, IA32 allows such an outcome.
+			 * Note that pm_active is updated by a locked operation,
+			 * which provides the reciprocal fence.
+			 */
+			atomic_thread_fence_seq_cst();
 		}
 		mask = &pmap->pm_active;
 	}
@@ -1792,6 +1804,8 @@ pmap_invalidate_range(pmap_t pmap, vm_offset_t sva, vm
 				if (cpuid != i)
 					pmap->pm_pcids[i].pm_gen = 0;
 			}
+			/* See comment in pmap_invalidate_page(). */
+			atomic_thread_fence_seq_cst();
 		}
 		mask = &pmap->pm_active;
 	}
@@ -1863,6 +1877,8 @@ pmap_invalidate_all(pmap_t pmap)
 				if (cpuid != i)
 					pmap->pm_pcids[i].pm_gen = 0;
 			}
+			/* See comment in pmap_invalidate_page(). */
+			atomic_thread_fence_seq_cst();
 		}
 		mask = &pmap->pm_active;
 	}

