svn commit: r283123 - head/sys/dev/hwpmc

John Baldwin jhb at FreeBSD.org
Tue May 19 19:15:20 UTC 2015


Author: jhb
Date: Tue May 19 19:15:19 2015
New Revision: 283123
URL: https://svnweb.freebsd.org/changeset/base/283123

Log:
  Fix two bugs that could result in PMC sampling effectively stopping.
  In both cases, the effect of the bug was that a very small positive
  number was written to the counter. This means that a large number of
  events needed to occur before the next sampling interrupt would trigger.
  Even with very frequently occurring events like clock cycles, wrapping all
  the way around could take a long time. Both bugs occurred when updating
  the saved reload count for an outgoing thread on a context switch.
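
  As an illustration of the failure mode (a simplified sketch, not the
  hwpmc code; the 48-bit width and the reload_to_raw() helper are assumed
  for the example):

  #include <stdint.h>
  #include <stdio.h>

  #define PMC_WIDTH 48    /* assumed counter width, as on the E5-2643 */

  /*
   * A sampling PMC is programmed with (2^width - reload) so that it
   * overflows, and raises the sampling interrupt, after "reload" events.
   */
  static uint64_t
  reload_to_raw(uint64_t reload)
  {
      return (((1ULL << PMC_WIDTH) - reload) & ((1ULL << PMC_WIDTH) - 1));
  }

  int
  main(void)
  {
      /* Normal case: the next interrupt arrives after 65536 events. */
      printf("%#jx\n", (uintmax_t)reload_to_raw(65536));

      /*
       * Failure mode: a reload count near 2^48 (e.g. the 2^48 - 2
       * produced by the second bug below) leaves a tiny positive value
       * in the counter, so almost 2^48 events must occur before the
       * next sampling interrupt.
       */
      printf("%#jx\n", (uintmax_t)reload_to_raw((1ULL << PMC_WIDTH) - 2));
      return (0);
  }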
  
  First, the counter-independent code compares the current reload count
  against the count set when the thread switched in and generates a delta
  to apply to the saved count. If this delta causes the reload counter
  to go negative, it would add a full reload interval to wrap it around to
  a positive value. The fix is to also add the full reload interval when
  the resulting count is exactly zero.
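
  A simplified model of that switch-out accounting (the names and the
  csw_out_update() helper are illustrative, not the actual hwpmc_mod.c
  code):

  #include <stdint.h>

  /*
   * saved:       per-process events remaining until the next sample
   * switched_in: reload count recorded when the thread switched in
   * current:     reload count read now, at switch-out
   * reloadcount: the full sampling interval
   */
  uint64_t
  csw_out_update(uint64_t saved, uint64_t switched_in, uint64_t current,
      uint64_t reloadcount)
  {
      int64_t tmp;

      /* Delta: events counted while the thread was on the CPU. */
      tmp = (int64_t)(switched_in - current);
      if (tmp < 0)
          tmp += (int64_t)reloadcount;

      saved -= (uint64_t)tmp;
      /*
       * Before this change the test below was "< 0": a saved count of
       * exactly zero was kept, and a zero reload count later programs
       * the counter with a value at or near zero, stalling sampling.
       */
      if ((int64_t)saved <= 0)
          saved += reloadcount;
      return (saved);
  }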
  
  Second, occasionally the raw counter value read during a context switch
  has actually wrapped, but an interrupt has not yet triggered. In this
  case the existing logic would return a very large reload count (e.g.
  2^48 - 2 if the counter had overflowed by a count of 2). This was seen
  both for fixed-function and programmable counters on an E5-2643.
  Work around this case by returning a reload count of zero.
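
  A worked example of the conversion, with an assumed 48-bit counter
  width (the actual functions are patched in hwpmc_core.c below):

  #include <stdint.h>
  #include <stdio.h>

  int
  main(void)
  {
      uint64_t v = 2;    /* raw value read just after the counter wrapped */
      uint64_t stale, fixed;

      /* Old conversion: 2^48 - 2 == 0xfffffffffffe, i.e. ~2^48 events. */
      stale = (1ULL << 48) - (v & ((1ULL << 48) - 1));

      /*
       * With the overflow check: for reload counts below 2^47 the
       * counter is always loaded in the upper half of its range, so a
       * clear top bit means it has already wrapped; report zero events
       * remaining instead.
       */
      fixed = ((v & (1ULL << 47)) != 0) ? stale : 0;

      printf("old %#jx, new %#jx\n", (uintmax_t)stale, (uintmax_t)fixed);
      return (0);
  }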
  
  PR:		198149
  Differential Revision:	https://reviews.freebsd.org/D2557
  Reviewed by:	emaste
  MFC after:	1 week
  Sponsored by:	Norse Corp, Inc.

Modified:
  head/sys/dev/hwpmc/hwpmc_core.c
  head/sys/dev/hwpmc/hwpmc_mod.c

Modified: head/sys/dev/hwpmc/hwpmc_core.c
==============================================================================
--- head/sys/dev/hwpmc/hwpmc_core.c	Tue May 19 19:01:52 2015	(r283122)
+++ head/sys/dev/hwpmc/hwpmc_core.c	Tue May 19 19:15:19 2015	(r283123)
@@ -203,6 +203,10 @@ core_pcpu_fini(struct pmc_mdep *md, int 
 static pmc_value_t
 iaf_perfctr_value_to_reload_count(pmc_value_t v)
 {
+
+	/* If the PMC has overflowed, return a reload count of zero. */
+	if ((v & (1ULL << (core_iaf_width - 1))) == 0)
+		return (0);
 	v &= (1ULL << core_iaf_width) - 1;
 	return (1ULL << core_iaf_width) - v;
 }
@@ -1806,6 +1810,10 @@ static const int niap_events = sizeof(ia
 static pmc_value_t
 iap_perfctr_value_to_reload_count(pmc_value_t v)
 {
+
+	/* If the PMC has overflowed, return a reload count of zero. */
+	if ((v & (1ULL << (core_iap_width - 1))) == 0)
+		return (0);
 	v &= (1ULL << core_iap_width) - 1;
 	return (1ULL << core_iap_width) - v;
 }

Modified: head/sys/dev/hwpmc/hwpmc_mod.c
==============================================================================
--- head/sys/dev/hwpmc/hwpmc_mod.c	Tue May 19 19:01:52 2015	(r283122)
+++ head/sys/dev/hwpmc/hwpmc_mod.c	Tue May 19 19:15:19 2015	(r283123)
@@ -1435,7 +1435,7 @@ pmc_process_csw_out(struct thread *td)
 					tmp += pm->pm_sc.pm_reloadcount;
 				mtx_pool_lock_spin(pmc_mtxpool, pm);
 				pp->pp_pmcs[ri].pp_pmcval -= tmp;
-				if ((int64_t) pp->pp_pmcs[ri].pp_pmcval < 0)
+				if ((int64_t) pp->pp_pmcs[ri].pp_pmcval <= 0)
 					pp->pp_pmcs[ri].pp_pmcval +=
 					    pm->pm_sc.pm_reloadcount;
 				mtx_pool_unlock_spin(pmc_mtxpool, pm);

