What's going on with the scheduler?

Lanny Baron lnb at FreeBSDsystems.COM
Mon Jul 7 17:43:09 PDT 2003


A load of 3 is pretty high. I think you have more going on.
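For context on why three always-runnable processes pin the load at 3.00: the load average is an exponentially decaying average of the run-queue length, sampled periodically. A minimal sketch of that arithmetic (illustrative constants and names, not the actual FreeBSD kernel code):

```python
import math

# Sketch of the classic BSD load-average calculation (illustrative, not
# the real kernel source): every 5 seconds the kernel samples the number
# of runnable processes and folds it into three exponentially decaying
# averages with 60-, 300- and 900-second time constants.
SAMPLE_INTERVAL = 5.0           # seconds between samples (assumed)
PERIODS = (60.0, 300.0, 900.0)  # 1-, 5- and 15-minute averages

def update(averages, runnable):
    """Fold one run-queue sample into the decaying averages."""
    new = []
    for avg, period in zip(averages, PERIODS):
        decay = math.exp(-SAMPLE_INTERVAL / period)
        new.append(avg * decay + runnable * (1.0 - decay))
    return new

# Three always-runnable setiathome processes on an otherwise idle box:
load = [0.0, 0.0, 0.0]
for _ in range(int(8 * 3600 / SAMPLE_INTERVAL)):  # a night's worth of samples
    load = update(load, 3)
print(["%.2f" % x for x in load])  # → ['3.00', '3.00', '3.00']
```

So a steady load of 3 just means three runnable processes on average; whether that is "high" depends on how many CPUs there are to run them.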

On one of our iNET Servers in Texas that does mail for several thousand
people, along with shells, RADIUS, etc.:

last pid: 97534;  load averages:  0.07,  0.03,  0.01    up 55+21:34:34  19:40:48
200 processes: 2 running, 198 sleeping
CPU states:  0.0% user,  0.0% nice,  0.2% system,  0.2% interrupt, 99.6% idle
Mem: 149M Active, 1513M Inact, 257M Wired, 76M Cache, 199M Buf, 8236K Free
Swap: 750M Total, 750M Free

  PID USERNAME      PRI NICE  SIZE    RES STATE  C   TIME   WCPU    CPU COMMAND
97525 lnb            28   0  2180K  1476K CPU1   0   0:00  0.69%  0.29% top
  329 root            2   0  2904K  1508K select 0  31:21  0.00%  0.00% smbd
  313 root            2   0 10420K  9660K select 1  26:39  0.00%  0.00% radiusd
  314 root            2   0 10412K  9624K select 1  24:25  0.00%  0.00% radiusd
  305 qmails          2   0  1056K   632K select 0  19:09  0.00%  0.00% qmail-s
 1497 smbd            2   0  3308K  2732K select 0  16:39  0.00%  0.00% eggdrop
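The load averages in the header above are the same triple returned by the getloadavg(3) library call, so a script can read them directly; for instance, from Python:

```python
import os

# os.getloadavg() wraps getloadavg(3) and returns the 1-, 5- and
# 15-minute load averages -- the same triple top and uptime print.
one, five, fifteen = os.getloadavg()
print("load averages: %.2f, %.2f, %.2f" % (one, five, fifteen))
```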

Lanny

On Mon, 2003-07-07 at 19:33, Andy Farkas wrote:
> FreeBSD 5.1-RELEASE with SCHED_4BSD on a quad ppro 200 (dell 6100/200).
> 
> Last night I started 3 setiathome's then went to bed. The system was
> otherwise idle and had a load of 3.00,  3.00,  3.00.
> 
> This morning, I wanted to copy a (large) file from a remote server, so I
> did a:
> 
>  scp -c blowfish -p -l 100 remote.host:filename .
> 
> which is running in another window (and will run for 3 more hours).
> 
> And now, on my otherwise idle system, the load is varying from less than
> 2.00 (!) to just over 3.00, with an average of about 2.50.
> 
> Here is some output from top:
> 
>   PID USERNAME   PRI NICE   SIZE    RES STATE  C   TIME   WCPU    CPU COMMAND
> 42946 setiathome 139   15 15524K 14952K *Giant 0  39.9H 89.26% 89.26% setiathome
> 49332 andyf      130    0  3084K  2176K *Giant 2  81:49 67.68% 67.68% ssh
>    12 root       -16    0     0K    12K CPU2   2 152.1H 49.12% 49.12% idle: cpu2
>    13 root       -16    0     0K    12K CPU1   1 148.7H 44.58% 44.58% idle: cpu1
>    11 root       -16    0     0K    12K RUN    3 152.1H 44.14% 44.14% idle: cpu3
>    14 root       -16    0     0K    12K CPU0   0 143.3H 41.65% 41.65% idle: cpu0
> 42945 setiathome 129   15 15916K 14700K *Giant 2  39.0H 25.20% 25.20% setiathome
> 42947 setiathome 129   15 15524K 14956K *Giant 1  40.3H 22.61% 22.61% setiathome
> 
> So, can someone explain why the seti procs are not getting 100% CPU like
> they were before the scp (ssh) started, and why there is so much idle time?
> I bet those *Giants have something to do with it...
> 
> --
> 
>  :{ andyf at speednet.com.au
> 
>         Andy Farkas
>     System Administrator
>    Speednet Communications
>  http://www.speednet.com.au/
> 
> 
> 
> _______________________________________________
> freebsd-smp at freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-smp
> To unsubscribe, send any mail to "freebsd-smp-unsubscribe at freebsd.org"
-- 
=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
Lanny Baron
Proud to be 100% FreeBSD
http://www.FreeBSDsystems.COM
=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=


