What's going on with the scheduler?
Andy Farkas
andyf at speednet.com.au
Mon Jul 7 21:33:06 PDT 2003
On Mon, 7 Jul 2003, Dan Nelson wrote:
> > I bet those *Giants have something to do with it...
>
> Most likely. That means they're waiting for some other process to
> release the big Giant kernel lock. Paste in top's header so we can see
> how many processes are locked, and what the system cpu percentage is.
This is what top looks like (up to the 1st 0.00% process) when sitting
idle* with 3 setiathomes:
last pid: 50290; load averages: 3.02, 3.07, 3.06 up 8+23:24:11 14:00:47
97 processes: 9 running, 71 sleeping, 4 zombie, 12 waiting, 1 lock
CPU states: 4.0% user, 72.0% nice, 4.6% system, 0.7% interrupt, 18.8% idle
Mem: 142M Active, 220M Inact, 116M Wired, 19M Cache, 61M Buf, 1916K Free
Swap: 64M Total, 128K Used, 64M Free
PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU CPU COMMAND
42946 setiathome 139 15 16552K 15984K RUN 0 43.8H 98.00% 98.00% setiathome
42945 setiathome 139 15 16944K 15732K CPU1 1 43.0H 97.56% 97.56% setiathome
42947 setiathome 139 15 15524K 14956K CPU0 2 42.9H 94.14% 94.14% setiathome
14 root -16 0 0K 12K RUN 0 144.7H 21.97% 21.97% idle: cpu0
12 root -16 0 0K 12K RUN 2 153.5H 19.87% 19.87% idle: cpu2
11 root -16 0 0K 12K RUN 3 153.6H 18.60% 18.60% idle: cpu3
13 root -16 0 0K 12K RUN 1 150.2H 17.29% 17.29% idle: cpu1
50090 root 111 0 11884K 11084K CPU3 3 4:22 11.57% 11.57% cdparanoia
12571 andyf 100 0 20488K 19308K select 1 509:56 4.00% 4.00% XFree86
17629 andyf 97 0 2676K 1624K select 3 244:57 1.03% 1.03% xdaliclock
16 root -48 -167 0K 12K *Giant 1 122:57 0.39% 0.39% swi7: tty:sio clock
38 root 20 0 0K 12K syncer 0 101:47 0.00% 0.00% syncer
*I'm running an X desktop and right now I'm ripping a CD, but as you can
see it's not doing much else.
Note how the seti procs are getting 94-98% CPU time.
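That matches what you'd expect in steady state: 3 CPU-bound niced processes
on a 4-way box should occupy roughly 3/4 of total CPU as "nice" and leave
about 1/4 idle. A one-liner to do the arithmetic (numbers are just the
counts from above, nothing measured):

```shell
# 3 CPU-bound niced procs, 4 CPUs: expected steady-state CPU shares.
# Observed above: 72.0% nice / 18.8% idle, close to the ideal split.
echo '3 4' | awk '{ printf "nice ~ %.1f%%, idle ~ %.1f%%\n",
                           100*$1/$2, 100*($2-$1)/$2 }'
# prints: nice ~ 75.0%, idle ~ 25.0%
```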
When I do my scp thing, top looks like this:
last pid: 50322; load averages: 1.99, 2.82, 2.98 up 8+23:39:09 14:15:45
98 processes: 8 running, 71 sleeping, 4 zombie, 12 waiting, 3 lock
CPU states: 1.7% user, 33.7% nice, 20.1% system, 0.6% interrupt, 43.9% idle
Mem: 135M Active, 224M Inact, 120M Wired, 19M Cache, 61M Buf, 1424K Free
Swap: 64M Total, 128K Used, 64M Free
PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU CPU COMMAND
42946 setiathome 139 15 16552K 15984K CPU3 2 44.0H 68.41% 68.41% setiathome
50296 andyf 125 0 3084K 2176K RUN 2 7:55 64.21% 64.21% ssh
12 root -16 0 0K 12K CPU2 2 153.6H 48.78% 48.78% idle: cpu2
11 root -16 0 0K 12K CPU3 3 153.6H 48.63% 48.63% idle: cpu3
13 root -16 0 0K 12K RUN 1 150.2H 48.44% 48.44% idle: cpu1
14 root -16 0 0K 12K RUN 0 144.8H 45.31% 45.31% idle: cpu0
42947 setiathome 130 15 15524K 14956K RUN 2 43.1H 28.56% 28.56% setiathome
42945 setiathome 125 15 15916K 14700K RUN 0 43.2H 25.05% 25.05% setiathome
50090 root -8 0 5636K 4832K cbwait 3 5:21 2.69% 2.69% cdparanoia
12571 andyf 97 0 20488K 19308K select 1 510:43 2.39% 2.39% XFree86
16 root -48 -167 0K 12K *Giant 0 123:11 0.98% 0.98% swi7: tty:sio clock
17629 andyf 97 0 2676K 1624K *Giant 0 245:18 0.93% 0.93% xdaliclock
38 root 20 0 0K 12K syncer 1 101:54 0.20% 0.20% syncer
50295 andyf 8 0 2528K 1256K nanslp 0 0:03 0.05% 0.05% scp
28905 root 8 0 0K 12K nfsidl 0 93:02 0.00% 0.00% nfsiod 0
Notice how 'nice' has gone to 33.7% and 'idle' to 43.9%, and the seti
procs have dropped well below 94%.
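For the record, diffing the two "CPU states" headers quoted above (values
pasted verbatim from this message) shows the idle gain doesn't even account
for the whole nice loss; system time absorbed the rest:

```shell
# Deltas between the idle-desktop sample and the scp sample above;
# positive = that state gained share while scp was running.
nice1=72.0; nice2=33.7; idle1=18.8; idle2=43.9
awk -v n1=$nice1 -v n2=$nice2 -v i1=$idle1 -v i2=$idle2 \
    'BEGIN { printf "nice %+0.1f, idle %+0.1f\n", n2-n1, i2-i1 }'
# prints: nice -38.3, idle +25.1
```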
> A truss of one of the seti processes may be useful too. setiathome
> really shouldn't be doing many syscalls at all.
If setiathome is making lots of syscalls, then running the 3 instances
should already show a problem, no?
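A rough sketch of how I'd tally that, assuming FreeBSD truss's usual
one-line-per-call output (the printf below is illustrative sample input,
not real seti output; in practice you'd pipe in a few seconds of
"truss -p <pid>"):

```shell
# Count syscalls by name from truss output, most frequent first.
count_syscalls() {
    # truss prints e.g. "gettimeofday({...}) = 0"; take the name
    # before the '(' and count occurrences of each.
    awk -F'(' '/^[a-z_]+\(/ { n[$1]++ } END { for (s in n) print n[s], s }' |
        sort -rn
}
printf 'gettimeofday({...}) = 0\nread(3,...) = 512\ngettimeofday({...}) = 0\n' |
    count_syscalls
# prints: 2 gettimeofday
#         1 read
```

A compute-bound process like setiathome should produce almost nothing here.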
--
:{ andyf at speednet.com.au
Andy Farkas
System Administrator
Speednet Communications
http://www.speednet.com.au/