9.3 Process Averages
Ian Smith
smithi at nimnet.asn.au
Sun Oct 19 11:24:10 UTC 2014
In freebsd-questions Digest, Vol 541, Issue 6, Message: 1
On Fri, 17 Oct 2014 08:25:30 -0400 Grant Peel <gpeel at thenetnow.com> wrote:
[reformatted a tad or two]
> I have recently built FreeBSD 9.3 (i386) from the ground up (making our
> next gen server build).
>
> Once complete I have been allowing it to run under no load, and have noticed
> that the load averages hover around 0.40 - 0.50 (see below).
>
> I have TERM'd and killed just about everything and the usage remains. I
> have never seen any other server do this with any other build. Is there
> something new in 9.x that might be causing this? I have servers running 8.0
> with lots of software running that are not this high.
>
> I am not at all concerned about the hardware as it was an active server
> with no issues, and the system compiled without any issues.
> root@spare:/usr/local/etc/rc.d # top -Sa -s10
> last pid: 3715; load averages: 0.43, 0.44, 0.43 up 0+08:55:07 08:14:46
>
> 33 processes: 2 running, 30 sleeping, 1 waiting
> CPU: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
> Mem: 12M Active, 309M Inact, 104M Wired, 88M Buf, 560M Free
> Swap: 4096M Total, 4096M Free
>
> PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
> 11 root 2 155 ki31 0K 16K RUN 1 17.8H 200.00% [idle]
> 12 root 17 -84 - 0K 136K WAIT 0 0:42 0.00% [intr]
> 8 root 1 16 - 0K 8K syncer 1 0:21 0.00% [syncer]
> 13 root 3 -8 - 0K 24K - 0 0:07 0.00% [geom]
Hi Grant,
etc; yes, nothing happening for a 0.43 load average. That's similar to what I've
noticed on 9.1-R through 9.3-PRE. The 9.1 box was a P3-M 1133 single core; I
didn't record load averages, but they seemed high for an idle system
(without X running), whereas my much busier 8.2 workstation (same model),
near idle, typically shows something in the order of:
last pid: 69003; load averages: 0.04, 0.09, 0.08 up 19+19:58:09
148 processes: 2 running, 130 sleeping, 16 waiting
CPU: 5.4% user, 0.0% nice, 4.5% system, 8.3% interrupt, 81.8% idle
Mem: 309M Active, 244M Inact, 151M Wired, 22M Cache, 85M Buf, 10M Free
Swap: 2048M Total, 142M Used, 1906M Free, 6% Inuse
PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
11 root 1 171 ki31 0K 8K RUN 73.7H 100.00% idle
35770 smithi 1 50 0 35868K 10228K select 294:31 2.69% gkrellm
12 root 16 -60 - 0K 128K WAIT 885:01 0.00% intr
1341 smithi 1 44 0 194M 151M select 552:35 0.00% Xorg
1433 smithi 3 44 0 18812K 4964K select 254:40 0.00% xmms
1423 smithi 1 45 0 30424K 3864K select 249:16 0.00% kdeinit
54051 smithi 5 59 0 230M 201M ucond 92:26 0.00% seamonkey-bin
1411 smithi 1 44 0 37620K 15896K select 55:44 0.00% kdeinit
21 root 1 44 - 0K 8K syncer 29:31 0.00% syncer
1429 smithi 1 44 0 32208K 4884K select 24:33 0.00% kdeinit
1399 smithi 1 44 0 32632K 7128K select 17:15 0.00% kdeinit
1418 smithi 1 44 0 29424K 9200K select 13:45 0.00% artsd
35841 smithi 1 44 0 3652K 880K nanslp 7:32 0.00% ephem
1111 root 1 44 0 3456K 400K select 5:14 0.00% moused
1009 root 1 44 0 3352K 360K select 2:40 0.00% powerd
Whereas my 9.3-PREREL X200 laptop (c2duo 2.4GHz, 2GiB) usually shows
above 0.5, and even up to 0.7 for extended periods, while doing, well, very
little except idling in X & KDE4 and such, and running top over ssh, viz:
last pid: 96312; load averages: 0.64, 0.62, 0.59 up 103+18:41:15
87 processes: 2 running, 84 sleeping, 1 waiting
CPU: 0.6% user, 0.0% nice, 3.4% system, 0.0% interrupt, 96.0% idle
Mem: 469M Active, 785M Inact, 463M Wired, 9680K Cache, 207M Buf, 135M Free
Swap: 2048M Total, 153M Used, 1895M Free, 7% Inuse
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
11 root 2 155 ki31 0K 32K RUN 1 1042.5 200.00% idle
57804 smithi 1 26 0 163M 13156K select 0 128.0H 5.27% gkrellm
1428 smithi 1 20 0 567M 372M select 1 18.1H 0.20% Xorg
1513 smithi 3 20 0 504M 42084K kqread 0 713:04 0.00% kdeinit4
9 root 1 16 - 0K 16K syncer 0 506:59 0.00% syncer
12 root 22 -84 - 0K 352K WAIT 1 360:58 0.00% intr
10484 smithi 2 22 0 488M 41492K select 0 284:20 0.00% kdeinit4
1515 smithi 2 52 0 468M 30628K select 0 196:17 0.00% kdeinit4
1511 smithi 3 20 0 540M 45824K select 0 74:37 0.00% kdeinit4
1475 smithi 3 20 0 481M 29584K select 0 62:31 0.00% kdeinit4
26060 root 2 20 0 52228K 2872K select 1 46:11 0.00% upowerd
1259 haldaemon 2 20 0 57488K 3736K select 0 38:43 0.00% hald
16 root 1 -16 - 0K 16K tzpoll 0 20:59 0.00% acpi_thermal
43659 smithi 4 43 0 790M 172M select 0 12:45 0.00% plasma-desktop
894 messagebus 1 20 0 14348K 1496K select 0 12:05 0.00% dbus-daemon
1488 smithi 4 20 0 530M 67128K kqread 0 11:32 0.00% kwin
853 root 1 20 0 12092K 800K select 1 8:33 0.00% powerd
15 root 32 -68 - 0K 512K - 0 8:26 0.00% usb
3952 root 1 20 0 22264K 1088K select 1 8:12 0.00% ntpd
etc; maybe 4% of each CPU busy, with gkrellm and the resulting Xorg work
accounting for about all of it.
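If you want to see whether anything is actually waking up on a box like
that, something along these lines is how I'd poke at it (only base-system
tools; the 10-second interval is just habit):

  # include kernel threads, one line per thread, skip the idle process
  top -SHz -s10
  # watch the run queue depth directly (the 'r' column)
  vmstat 5
  # and the raw figures behind top's header
  sysctl vm.loadavg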
And an ancient 5.5-STABLE <cough> firewall, doing NAT plus various servers:
last pid: 41572; load averages: 0.06, 0.02, 0.00 up 968+01:32:14
118 processes: 3 running, 92 sleeping, 23 waiting
CPU states: 0.4% user, 0.0% nice, 1.0% system, 0.0% interrupt, 98.6% idle
Mem: 68M Active, 24M Inact, 46M Wired, 7564K Cache, 25M Buf, 2000K Free
Swap: 384M Total, 39M Used, 345M Free, 10% Inuse
PID USERNAME PRI NICE SIZE RES STATE TIME WCPU CPU COMMAND
11 root 171 52 0K 8K RUN ??? 97.27% 97.27% idle
27 root -28 -147 0K 8K RUN 115.1H 0.00% 0.00% swi5: clock sio
49485 root 96 0 18724K 17240K select 83.0H 0.00% 0.00% natd
5704 mysql 96 0 44648K 1296K lthr 61.0H 0.00% 0.00% mysqld
5747 root 96 0 1368K 296K select 54.2H 0.00% 0.00% moused
29 root -44 -163 0K 8K WAIT 32.4H 0.00% 0.00% swi1: net
22 root -80 -199 0K 8K WAIT 931:10 0.00% 0.00% irq11: cbb0 cbb1+++
44 root 20 0 0K 8K syncer 823:30 0.00% 0.00% syncer
41 root 171 52 0K 8K pgzero 585:33 0.00% 0.00% pagezero
81196 root 96 0 4448K 792K select 555:23 0.00% 0.00% mpd4
86908 root 96 0 3248K 784K select 172:12 0.00% 0.00% ntpd
3118 bind 96 0 8860K 3980K select 159:17 0.00% 0.00% named
2886 root 96 0 18308K 3520K select 103:28 0.00% 0.00% httpd
57748 root 96 0 3828K 960K select 70:53 0.00% 0.00% sendmail
2721 root 76 -20 1392K 144K select 62:49 0.00% 0.00% apmd
What top shows does match sysctl vm.loadavg on each of the above, so I
really don't understand what 'load average' is meant to mean any more.
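(For anyone wanting to cross-check, the same three figures can be read a
few ways; a quick sketch with nothing beyond the base system:

  sysctl vm.loadavg   # prints e.g.  vm.loadavg: { 0.43 0.44 0.43 }
  uptime              # the same numbers after 'load averages:'

and getloadavg(3) hands the same values to a program as doubles, with the
kernel's fixed-point scaling already divided out.)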
cheers, Ian