svn commit: r271707 - stable/10/sys/kern

Alexander Motin mav at FreeBSD.org
Wed Sep 17 14:06:21 UTC 2014


Author: mav
Date: Wed Sep 17 14:06:21 2014
New Revision: 271707
URL: http://svnweb.freebsd.org/changeset/base/271707

Log:
  MFC r271604, r271616:
  Add a couple of memory barriers to order tdq_cpu_idle and tdq_load accesses.
  
  This change fixes transient performance drops in some of my benchmarks
  that vanished as soon as I tried to collect any stats from the scheduler.
  It looks like reordered accesses to those variables sometimes caused the
  loss of an IPI_PREEMPT, which delayed thread execution until some later
  interrupt.
  
  Approved by:	re (marius)

Modified:
  stable/10/sys/kern/sched_ule.c
Directory Properties:
  stable/10/   (props changed)

Modified: stable/10/sys/kern/sched_ule.c
==============================================================================
--- stable/10/sys/kern/sched_ule.c	Wed Sep 17 08:47:58 2014	(r271706)
+++ stable/10/sys/kern/sched_ule.c	Wed Sep 17 14:06:21 2014	(r271707)
@@ -1037,6 +1037,14 @@ tdq_notify(struct tdq *tdq, struct threa
 	ctd = pcpu_find(cpu)->pc_curthread;
 	if (!sched_shouldpreempt(pri, ctd->td_priority, 1))
 		return;
+
+	/*
+	 * Make sure that the tdq_load update made before calling this
+	 * function is globally visible before we read tdq_cpu_idle.  The
+	 * idle thread accesses both without locks; the order is important.
+	 */
+	mb();
+
 	if (TD_IS_IDLETHREAD(ctd)) {
 		/*
 		 * If the MD code has an idle wakeup routine try that before
@@ -2645,6 +2653,12 @@ sched_idletd(void *dummy)
 
 		/* Run main MD idle handler. */
 		tdq->tdq_cpu_idle = 1;
+		/*
+		 * Make sure that the tdq_cpu_idle update is globally visible
+		 * before cpu_idle() reads tdq_load.  The order is important
+		 * to avoid a race with tdq_notify().
+		 */
+		mb();
 		cpu_idle(switchcnt * 4 > sched_idlespinthresh);
 		tdq->tdq_cpu_idle = 0;
 