Is anyone using the schedgraph.d script?

Ryan Stone rysto32 at gmail.com
Tue Jan 27 03:01:53 UTC 2015


Hm, there was one bug in that script.  I uploaded a fixed version.  The fix was:

-       printf("%d %d KTRGRAPH group:\"thread\", id:\"%s/%s tid %d\",
state:\"runq add\", attributes: prio:%d, linkedto:\"%s/%s tid %d\"\n",
cpu, timestamp, args[0]->td_proc->p_comm, args[0]->td_name,
args[0]->td_tid, args[0]->td_priority, curthread->td_proc->p_comm,
curthread->td_name, args[0]->td_tid);
+       printf("%d %d KTRGRAPH group:\"thread\", id:\"%s/%s tid %d\",
state:\"runq add\", attributes: prio:%d, linkedto:\"%s/%s tid %d\"\n",
cpu, timestamp, args[0]->td_proc->p_comm, args[0]->td_name,
args[0]->td_tid, args[0]->td_priority, curthread->td_proc->p_comm,
curthread->td_name, curthread->td_tid);

Note that the last printf argument used args[0]->td_tid where
curthread->td_tid was intended.
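
For reference, here is a minimal sketch of how the corrected clause
looks in context.  I'm assuming the printf is attached to
sched:::enqueue, which is what the "runq add" state corresponds to:

sched:::enqueue
{
        /*
         * args[0] is the thread being put on the run queue; curthread
         * is the thread doing the enqueue.  The final argument must be
         * curthread's tid, not args[0]'s.
         */
        printf("%d %d KTRGRAPH group:\"thread\", id:\"%s/%s tid %d\", state:\"runq add\", attributes: prio:%d, linkedto:\"%s/%s tid %d\"\n",
            cpu, timestamp, args[0]->td_proc->p_comm, args[0]->td_name,
            args[0]->td_tid, args[0]->td_priority,
            curthread->td_proc->p_comm, curthread->td_name,
            curthread->td_tid);
}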


One other thing that I have noticed with the schedgraph data gathering
is that, unlike KTR, dtrace has every CPU gather its data into a
CPU-local buffer.  This means that a CPU that sees a large number of
scheduler events will roll over its ring buffer much more quickly than
a lightly loaded CPU, which can lead to confusing or misleading
schedgraph output at the beginning of the traced period.  You can
mitigate this problem by allowing dtrace to allocate a larger ring
buffer with:

#pragma D option bufsize=32m

(You can potentially tune it even higher than that, but 32m is a good
place to start.)
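
For what it's worth, a sketch of what the option block at the top of
the script would look like with the larger buffer.  I'm assuming the
script already requests a ring buffer (bufpolicy=ring), which is what
produces the roll-over behaviour described above:

/* Keep a rolling per-CPU ring buffer, sized so busy CPUs don't
 * overwrite their oldest events too quickly. */
#pragma D option bufpolicy=ring
#pragma D option bufsize=32m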


Finally, I've noticed that schedgraph seems to have problems
auto-detecting the clock frequency, so I tend to forcibly specify
1GHz.  (dtrace always outputs timestamps in nanoseconds, so 1GHz is
always correct for dtrace-gathered data.)
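
As a hedged example of the whole workflow (the ktr.out file name is
just an example, and I'm going from memory that schedgraph.py takes
the clock frequency in GHz as an optional second argument):

# Gather events into a file; stop dtrace with ^C when done.
dtrace -s schedgraph.d > ktr.out
# View them, forcing a 1GHz clock instead of auto-detection.
python schedgraph.py ktr.out 1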

