vf1 kernel pid 609 (ntpd), jid 0, uid 0, was killed; out of swap space
David Christensen
dpchrist at holgerdanske.com
Thu Oct 29 03:07:31 UTC 2020
On 2020-10-28 19:05, David Christensen wrote:
> 2020-10-28 17:36:43 toor at vf1 ~/src/perl/Dpchrist-Perl-b0_01068002
> # time for i in $( seq 1 100 ) ; do make test ; done
After ~140 minutes, the tests are still running, but visibly slower.
Logging in via SSH is sluggish. There are no new console error messages.
Status is as follows:
2020-10-28 19:55:13 toor at vf1 ~
# swapctl -l
Device:              1024-blocks     Used:
/dev/mirror/swap         1048572    139504
2020-10-28 19:55:58 toor at vf1 ~
# top -w -d 1 | head -n 20
last pid: 60001;  load averages:  1.10,  1.10,  1.32    up 0+05:28:53  19:56:24
48 processes:  2 running, 46 sleeping
CPU: 17.3% user,  0.0% nice, 24.7% system,  2.7% interrupt, 55.4% idle
Mem: 22M Active, 1360K Inact, 1136K Laundry, 926M Wired, 4788K Free
ARC: 823M Total, 163M MFU, 634M MRU, 153K Anon, 3289K Header, 22M Other
     40M Compressed, 760M Uncompressed, 18.97:1 Ratio
Swap: 1024M Total, 168M Used, 856M Free, 16% Inuse

  PID USERNAME    THR PRI NICE   SIZE    RES   SWAP STATE    C   TIME    WCPU COMMAND
59998 root          1  22    0    32M  6568K      0 swread   1   0:01   2.98% perl
59999 root          1  25    0    33M  9600K      0 CPU1     1   0:01   2.98% perl
59995 root          1  20    0    42M  6776K      0 swread   1   0:01   0.98% perl
58505 root          1  20    0    25M  1492K      0 select   1   0:11   0.00% perl
 1730 root          1  20    0    20M   260K      0 select   1   0:09   0.00% sshd
  651 root          1  20    0    17M   276K      0 select   0   0:08   0.00% sendmail
59152 root          1  20    0    25M  1360K      0 select   0   0:07   0.00% perl
 4838 root          1  20    0    20M   204K      0 select   0   0:05   0.00% sshd
 4834 root          1  20    0    20M   172K      0 select   0   0:05   0.00% sshd
 4748 root          1  20    0    20M   148K      0 select   1   0:04   0.00% sshd
 5213 root          1  20    0    13M   312K      0 select   0   0:04   0.00% top
2020-10-28 19:56:24 toor at vf1 ~
# top -m io -w -d 1 | head -n 20
last pid: 60004;  load averages:  0.93,  1.06,  1.30    up 0+05:28:59  19:56:30
48 processes: 3 running, 45 sleeping
CPU: 17.3% user, 0.0% nice, 24.7% system, 2.7% interrupt, 55.4% idle
Mem: 19M Active, 1572K Inact, 1692K Laundry, 926M Wired, 6920K Free
ARC: 823M Total, 163M MFU, 634M MRU, 128K Anon, 3288K Header, 22M Other
40M Compressed, 760M Uncompressed, 18.97:1 Ratio
Swap: 1024M Total, 151M Used, 873M Free, 14% Inuse
PID USERNAME VCSW IVCSW READ WRITE FAULT TOTAL PERCENT COMMAND
59999 root 3162 1053 395 2 2952 3349 0.26% perl
59995 root 9057 1779 401 1 8750 9152 0.70% perl
59964 root 9645 1742 375 3 9341 9719 0.75% perl
60002 root 152 245 134 0 158 292 0.02% perl
58505 root 218567 26744 281 0 202628 202909 15.57% perl
1730 root 192785 13715 38 0 178942 178980 13.74% sshd
651 root 124736 6183 1050 1 153814 154865 11.89% sendmail
59152 root 143127 18210 253 0 132596 132849 10.20% perl
4838 root 68504 5564 39 0 32530 32569 2.50% sshd
4834 root 66724 5295 126 0 29236 29362 2.25% sshd
4748 root 60897 4640 113 0 25122 25235 1.94% sshd
2020-10-28 19:56:30 toor at vf1 ~
# zpool list vf1zpool1
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
vf1zpool1   960M   103M   857M        -         -    47%    10%  1.00x  ONLINE  -
2020-10-28 19:56:52 toor at vf1 ~
# zpool iostat -v vf1zpool1
                   capacity     operations     bandwidth
pool            alloc   free   read  write   read  write
--------------  -----  -----  -----  -----  -----  -----
vf1zpool1        103M   857M      2     37  17.4K   430K
  gpt/vf1zpool1  103M   857M      2     37  17.4K   430K
--------------  -----  -----  -----  -----  -----  -----
sendmail(8) seems to be using more I/O than everything else. How do I
determine what I/O sendmail(8) is doing, and why?
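Perhaps procstat(1) or truss(1) would show what sendmail(8) is doing. For example (651 is the sendmail PID from the top(1) output above; adjust as needed):

# procstat -f 651
# truss -fp 651

procstat -f lists the process's open file descriptors; truss attaches to the process and traces its system calls, including reads and writes.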
The top(1) load averages are now much lower (1.10 vs. 3.80). How do I
determine why?
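One guess, based on the numbers above (926M Wired, 823M ARC, under 7M Free, and perl processes stuck in the "swread" state), is that the ZFS ARC is competing with the test processes for memory and pushing them into swap. If so, capping the ARC might help; e.g., in /boot/loader.conf (the 512M value is only a guess for this machine):

vfs.zfs.arc_max="512M"

This is a loader tunable, so it takes effect at the next boot.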
David
More information about the freebsd-questions mailing list