System CPU between 50-70% and need to find out why

Paul bsdlist at cogeco.ca
Mon Apr 23 15:27:05 UTC 2007


Hi Steve,

When I shut off everything except Apache (under very low load) and 
qpopper, the CPU is still maxed out.

Here is another look with the VM setting turned off.

I don't always see the idle: cpu entries in the list, since the display 
changes constantly. I have a hunch this is disk-related, but I am not 
sure. I include two snapshots below.
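
Since I suspect disk I/O, here is a minimal way I would try to confirm 
it (assuming the standard FreeBSD base tools are available; the device 
name da0 below is just a placeholder):

   # Per-disk busy percentage and queue length, refreshed every second
   gstat -I 1s

   # Paging, interrupts, context switches, and disk transfers system-wide
   vmstat -w 1

   # Extended per-device statistics for one disk
   iostat -x -w 1 da0

If gstat shows the disks pegged near 100% busy while processes sit in 
getblk or ufs states in top, the system time is probably going into the 
buffer cache and filesystem paths.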

Thanks,

Paul

last pid: 18967;  load averages: 29.00, 44.28, 46.39    up 0+10:16:42  11:17:00
308 processes: 41 running, 239 sleeping, 7 zombie, 21 waiting
CPU states: 13.3% user,  0.0% nice, 74.5% system, 12.2% interrupt,  0.0% idle
Mem: 1204M Active, 5678M Inact, 381M Wired, 20K Cache, 214M Buf, 8398M Free
Swap: 8192M Total, 8192M Free

   PID USERNAME       THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
    11 root             1 171   52     0K    16K RUN    2  93:59 22.80% idle: cpu2
    10 root             1 171   52     0K    16K RUN    3 116:20 22.46% idle: cpu3
    12 root             1 171   52     0K    16K RUN    1 101:36 22.36% idle: cpu1
    13 root             1 171   52     0K    16K RUN    0 123:32 22.22% idle: cpu0
 18944 root             1 132    0 15944K  5332K RUN    1   0:01 14.99% perl5.8.8
18922 root             1 131    0 15648K  3196K RUN    2   0:01 12.45% imapd
18640 root             1 127    0  7484K  2792K CPU1   0   0:10 11.86% top
18952 root             1 131    0 10536K  1412K RUN    1   0:00 11.30% qpopper
18894 user1            1 127    0 10740K  2948K CPU3   3   0:02 10.21% qpopper
18845 user2            1  -4    0 10740K  2944K getblk 0   0:04 10.14% qpopper
18871 user3            1 126    0 10740K  3008K CPU0   0   0:03  9.48% qpopper
18920 root             1 129    0 15648K  3196K RUN    3   0:01  8.68% imapd
17491 user4            1 123    0  7864K  3168K select 2   0:44  8.54% top
    14 root             1 -32 -151     0K    16K WAIT   0  59:32  7.52% swi4: clock sio
18939 root             1 130    0 10740K  2940K RUN    2   0:01  7.10% qpopper
18923 user5            1 128    0 10740K  2948K RUN    2   0:01  7.03% qpopper
    48 root             1  -4    0     0K    16K ufs    2  28:03  6.74% syncer
18953 root             1 130    0 10544K  2140K RUN    0   0:00  6.46% qpopper
18935 root             1 130    0 10740K  2944K RUN    2   0:01  6.21% qpopper
18941 user6            1 130    0 10740K  3008K RUN    2   0:01  6.07% qpopper
18956 root             1 131    0  6084K   860K RUN    3   0:00  5.95% qpopper
    16 root             1 -44 -163     0K    16K WAIT   0  52:09  5.71% swi1: net
18940 user7            1 129    0 10740K  2944K RUN    0   0:00  5.62% qpopper
18934 root             1 130    0 10740K  2940K RUN    1   0:00  5.47% qpopper
18954 root             1 130    0 10532K  2104K RUN    0   0:00  5.38% qpopper
18949 root             1 130    0 10576K  1424K RUN    0   0:00  5.07% qpopper
18965 root             1 132    0  5844K  1536K RUN    1   0:00  5.00% inetd
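
A complementary view, if the installed top supports the io display mode 
(an assumption about this version), is per-process I/O instead of CPU:

   # Sort processes by total I/O (reads, writes, faults) instead of WCPU
   top -m io -o total

That should show whether the qpopper processes stuck in getblk above 
are also the heavy readers, which would back up the disk theory.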


last pid: 20588;  load averages: 47.61, 36.13, 39.78    up 0+10:24:00  11:24:18
531 processes: 93 running, 413 sleeping, 19 zombie, 6 lock
CPU states: 19.1% user,  0.0% nice, 74.8% system,  6.1% interrupt,  0.0% idle
Mem: 1590M Active, 5795M Inact, 404M Wired, 20K Cache, 214M Buf, 7872M Free
Swap: 8192M Total, 8192M Free

   PID USERNAME       THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
  1375 vscan            3  98    0 65632K 52772K ucond  0  52:18 17.63% clamd
  1184 bind             1 125    0 63620K 60408K select 2  60:11  9.42% named
19776 root             1 126    0  8096K  3408K CPU2   0   0:23  7.57% top
20429 user1            1 127    0 10740K  3008K RUN    0   0:03  6.98% qpopper
20424 user2            1 127    0 10740K  3008K RUN    1   0:03  6.81% qpopper
20395 user3            1 127    0 10740K  2944K RUN    2   0:03  6.81% qpopper
20442 user4            1 127    0 10740K  2944K RUN    0   0:03  6.75% qpopper
17491 user5            1 118    0  8184K  3488K RUN    0   1:08  6.74% top
20391 user6            1 127    0 10768K  2972K RUN    2   0:03  6.59% qpopper
19135 vscan            1 128    0   126M 85504K select 2   0:20  5.22% perl5.8.8
19136 vscan            1 127    0   124M 83900K RUN    2   0:20  4.69% perl5.8.8
20476 root             1 127    0 15644K  3188K RUN    3   0:01  4.40% imapd
20478 user7            1 127    0 15648K  3232K RUN    3   0:01  4.40% imapd
20475 root             1 127    0 10024K  4284K RUN    2   0:01  4.33% sendmail
20139 user8            1 127    0 15724K  3288K RUN    2   0:05  4.07% imapd
20499 user9            1 127    0 10740K  2944K RUN    0   0:01  4.00% qpopper
19134 vscan            1 125    0   127M 86256K select 3   0:21  3.91% perl5.8.8
20304 user10           1   4    0 10740K  2948K sbwait 2   0:04  3.86% qpopper
19133 vscan            1  -4    0   123M 83372K RUN    2   0:20  3.86% perl5.8.8
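
With system time at roughly 74% in both snapshots, another thing worth 
watching (again assuming only the base tools here) is whether the kernel 
time comes from context switches and syscalls rather than disk waits:

   # Live breakdown: CPU states, disk %busy, syscalls, and context switches
   systat -vmstat 1

A very high Csw/syscall rate with mostly idle disks would point at the 
sheer number of short-lived qpopper and imapd sessions rather than I/O.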




At 11:07 AM 23/04/2007, you wrote:
>But when disabled, do the stats in top show a different picture which
>might identify the app / component that is causing so much vm work?
>
>    Steve


