hwpmc granularity and 6.4 network performance
Vadim Goncharov
vadim_nuclight at mail.ru
Mon Nov 24 03:29:23 PST 2008
Hi!
I've recently performed an upgrade of a busy production router from 6.2 to 6.4-PRE.
I added two lines to my kernel config and did the usual make buildkernel:
device hwpmc # Driver (also a loadable module)
options HWPMC_HOOKS # Other necessary kernel hooks
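(For reference, as the comment above says, the driver can also be loaded at
runtime instead of being compiled in -- a rough sketch, assuming HWPMC_HOOKS
is still built into the kernel:

# kldload hwpmc          # load the driver as a module
# kldstat | grep hwpmc   # confirm it is loaded

I chose to compile it in statically anyway.)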
After rebooting with the new world and kernel, I noticed that CPU load has
slightly increased (not measured precisely, it varies every second anyway, as
users do not generate steady traffic), and in top -S 'swi1: net' now often
shows up in state *Giant, which it never did on 6.2, even though the kernel
config did not change much and device polling is still used. What could have
caused this?
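(If it matters, I guess one way to check whether the stack is even supposed
to run Giant-free would be the mpsafenet knob, assuming I understand that
tunable correctly:

# sysctl debug.mpsafenet    # 1 should mean the network stack runs without Giant

but I have not changed anything related to it.)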
Another question: I've read the "Sixty second HWPMC howto" and tried to find
out what exactly eats my CPU. BTW, those instructions did not apply exactly on
my machine; this is what I did:
# cd /tmp
# pmcstat -S instructions -O /tmp/sample.out
(I let it run for a while, then stopped it with ^C)
# pmcstat -R /tmp/sample.out -k /boot/kernel/kernel -g
# gprof /boot/kernel/kernel p4-instr-retired/kernel.gmon > kernel.gmon.result
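(As a sanity check, I believe one can list which event names the CPU and
driver actually support with pmccontrol, e.g.:

# pmccontrol -L    # list available PMC event names

which at least shows whether the p4-* events are there.)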
Now in the file kernel.gmon.result I see the following:
granularity: each sample hit covers 4 byte(s) for 0.00% of 692213.00 seconds
                                  called/total       parents
index  %time    self  descendents  called+self    name           index
                                  called/total       children

                                                     <spontaneous>
[1]     31.7  219129.00       0.00                 ipfw_chk [1]
-----------------------------------------------
[...]
Why does it show 0.00 in this column?
In the next listing, the flat profile, the output is more readable, but some
columns are empty again:
granularity: each sample hit covers 4 byte(s) for 0.00% of 692213.00 seconds
  %   cumulative     self              self     total
 time    seconds     seconds    calls  ms/call  ms/call  name
 31.7   219129.00  219129.00                             ipfw_chk [1]
 10.4   291179.00   72050.00                             bcmp [2]
  6.1   333726.00   42547.00                             rn_match [3]
  2.7   352177.00   18451.00                             generic_bzero [4]
  2.4   368960.00   16783.00                             strncmp [5]
OK, I can conclude from this that I should optimize my ipfw ruleset, but
that's all. I know from the sources that ipfw_chk() is a big function with a
bunch of 'case's in a large 'switch', and I want to know which parts of that
switch are executed more often. The listing says the granularity is 4 bytes,
so I assume there is a sample count for each 4-byte chunk of binary code, and
the profile must therefore contain this information. My kernel is compiled
with:
makeoptions DEBUG=-g
so kgdb knows where the instructions for each line of source code are.
How can I obtain this info from the profile? It would also be useful to know
which places call that bcmp() and rn_match().
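(I guess that if I could get the raw sample addresses out of the profile, I
could map them to source lines by hand against the unstripped kernel --
assuming the kernel.debug left in the build directory is the right file; the
MYKERNEL config name and the address below are just placeholders:

# nm -n /boot/kernel/kernel | grep ipfw_chk
# addr2line -f -e /usr/obj/usr/src/sys/MYKERNEL/kernel.debug 0xc0654321

and find the callers of bcmp()/rn_match() by simply grepping the sources:

# grep -rn 'rn_match(' /usr/src/sys/net /usr/src/sys/netinet

but that feels like doing the profiler's job by hand, so I hope there is a
better way.)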
--
WBR, Vadim Goncharov. ICQ#166852181 mailto:vadim_nuclight at mail.ru
[Moderator of RU.ANTI-ECOLOGY][FreeBSD][http://antigreen.org][LJ:/nuclight]