vf1 kernel pid 609 (ntpd), jid 0, uid 0, was killed; out of swap space

David Christensen dpchrist at holgerdanske.com
Thu Oct 29 02:05:58 UTC 2020


freebsd-questions:

I have a laptop with Debian and VirtualBox:

2020-10-28 18:16:58 dpchrist at tinkywinky ~
$ cat /etc/debian_version ; uname -a ; VirtualBoxVM -? | head -n 1
9.13
Linux tinkywinky 4.9.0-13-amd64 #1 SMP Debian 4.9.228-1 (2020-07-05) x86_64 GNU/Linux
Oracle VM VirtualBox VM Runner v6.1.16


I have created a FreeBSD virtual machine 'vf1':

2020-10-28 18:14:27 toor at vf1 ~
# freebsd-version ; uname -a
12.1-RELEASE-p10
FreeBSD vf1.tracy.holgerdanske.com 12.1-RELEASE-p10 FreeBSD 12.1-RELEASE-p10 GENERIC  amd64



I wrote a Perl library (distribution) that includes functions for 
invoking zfs(8) via sh(1) and ssh(1) (e.g. via Perl's 'system').  I am 
currently stress-testing the library by running the test suite one 
hundred times in each of four terminals concurrently:

2020-10-28 17:36:43 toor at vf1 ~/src/perl/Dpchrist-Perl-b0_01068002
# time for i in $( seq 1 100 ) ; do make test ; done
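
For context, the kind of command the library runs via Perl's system()
is an ordinary zfs(8) invocation, executed either locally through sh(1)
or remotely through ssh(1).  The lines below are only a hypothetical
sketch of that style of invocation; the dataset 'vf1zpool1/ttt' and the
ssh target are illustrative and are not taken from the test suite:

# local invocations, as run through sh(1) by Perl's system()
zfs create vf1zpool1/ttt
zfs snapshot vf1zpool1/ttt@s0

# equivalent remote invocation over ssh(1)
ssh toor@vf1 zfs list -H -o name -r vf1zpool1/ttt

# clean up the illustrative dataset
zfs destroy -r vf1zpool1/ttt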


After ~40 minutes, I see the following on the virtual machine console:

Oct 28 15:15:43 vf1 kernel pid 609 (ntpd), jid 0, uid 0, was killed; out of swap space
Oct 28 15:15:56 vf1 kernel pid 763 (bash), jid 0, uid 0, was killed; out of swap space
Oct 28 15:16:28 vf1 kernel pid 1727 (bash), jid 0, uid 0, was killed; out of swap space


All four tests are still running.  I see no new console messages after 
~90 minutes.
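
The same kernel messages are also recorded by syslogd(8), so one way to
watch for further kills while the loops run (assuming the default
/etc/syslog.conf, which sends kernel notices to /var/log/messages):

tail -F /var/log/messages | grep 'was killed'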


I am trying to figure out the console warnings:

2020-10-28 18:49:27 toor at vf1 ~
# top -w -d 1 | head -n 20
last pid: 56207;  load averages:  3.80,  3.96,  3.94  up 0+04:22:00  18:49:31
51 processes:  2 running, 49 sleeping
CPU: 19.7% user,  0.0% nice, 22.6% system,  1.9% interrupt, 55.8% idle
Mem: 47M Active, 5084K Inact, 5200K Laundry, 876M Wired, 21M Free
ARC: 765M Total, 164M MFU, 578M MRU, 374K Anon, 3092K Header, 21M Other
      38M Compressed, 706M Uncompressed, 18.73:1 Ratio
Swap: 1024M Total, 128M Used, 896M Free, 12% Inuse

   PID USERNAME    THR PRI NICE   SIZE    RES SWAP STATE    C   TIME    WCPU COMMAND
 56196 root          1  45    0    34M    21M    0 swread   0   0:00   5.96% perl
 56174 root          1  52    0    43M    18M    0 RUN      1   0:01   3.96% perl
 56139 root          1  48    0    34M  7296K    0 swread   0   0:01   1.95% perl
 56172 root          1  48    0    28M  3872K    0 wait     1   0:00   1.95% perl
 54980 root          1  43    0    25M  5512K    0 swread   0   0:04   0.98% perl
 56159 root          1  26    0    24M  3192K    0 select   0   0:00   0.98% perl
  1730 root          1  20    0    20M   872K    0 select   0   0:08   0.00% sshd
   651 root          1  20    0    17M   776K    0 select   0   0:06   0.00% sendmail
 54786 root          1  26    0    25M  3472K    0 select   0   0:04   0.00% perl
 54841 root          1  28    0    25M  3256K    0 select   0   0:04   0.00% perl
  4834 root          1  20    0    20M  1060K    0 select   0   0:04   0.00% sshd
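
In the snapshot above the ARC (765M) accounts for most of the 876M of
wired memory and only ~21M is free, so the perl processes appear to be
competing with the ARC for RAM (several are in the 'swread' state).
For reference, the ARC can be capped with the vfs.zfs.arc_max loader
tunable; a minimal sketch, with a purely illustrative 512M value:

# cap the ZFS ARC; takes effect at the next boot
echo 'vfs.zfs.arc_max="512M"' >> /boot/loader.conf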


2020-10-28 18:51:23 toor at vf1 ~
# top -m io -w -d 1 | head -n 20
last pid: 56836;  load averages:  2.82,  3.51,  3.75  up 0+04:24:54  18:52:25
52 processes:  6 running, 46 sleeping
CPU: 19.6% user,  0.0% nice, 22.9% system,  1.9% interrupt, 55.5% idle
Mem: 35M Active, 260K Inact, 12M Laundry, 888M Wired, 20M Free
ARC: 785M Total, 163M MFU, 598M MRU, 153K Anon, 3166K Header, 21M Other
      38M Compressed, 724M Uncompressed, 18.90:1 Ratio
Swap: 1024M Total, 169M Used, 855M Free, 16% Inuse

   PID USERNAME     VCSW  IVCSW   READ  WRITE  FAULT  TOTAL PERCENT COMMAND
56830 root        4701   3573    327      1   4491   4819   0.82% perl
56821 root       12346   6721    343      1  11970  12314   2.10% perl
56828 root        5604   5668    355      1   5256   5612   0.96% perl
  1730 root      171818  11251     31      0 158749 158780  27.04% sshd
   651 root       86749   2265    297      1 108232 108530  18.48% sendmail
54786 root       62254  11017    288      0  38110  38398   6.54% perl
54841 root       60848  12039    121      0  37260  37381   6.37% perl
54980 root       58770  10243    212      0  35617  35829   6.10% perl
  4834 root       40981   2325    115      0   4454   4569   0.78% sshd
  4838 root       41210   2389     30      0   5984   6014   1.02% sshd
  4748 root       40306   2325    108      0   5319   5427   0.92% sshd

2020-10-28 18:57:52 toor at vf1 ~
# swapctl -l
Device:       1024-blocks     Used:
/dev/mirror/swap   1048572    136220
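
For reference, swap space can also be extended on a running system with
an md(4)-backed swap file; a sketch with illustrative path, size, and
md unit number:

# create and enable a 1 GB swap file backed by md(4)
dd if=/dev/zero of=/usr/swap0 bs=1m count=1024
chmod 0600 /usr/swap0
mdconfig -a -t vnode -f /usr/swap0 -u 99
swapon /dev/md99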


The pool 'vf1zpool1' is the target of the zfs(8) commands:

2020-10-28 18:57:58 toor at vf1 ~
# zpool list vf1zpool1
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
vf1zpool1   960M  99.1M   861M        -         -    46%    10%  1.00x  ONLINE  -

2020-10-28 19:00:33 toor at vf1 ~
# zpool iostat -v vf1zpool1
                     capacity     operations    bandwidth
pool             alloc   free   read  write   read  write
---------------  -----  -----  -----  -----  -----  -----
vf1zpool1        99.1M   861M      2     44  21.1K   518K
   gpt/vf1zpool1  99.1M   861M      2     44  21.1K   518K
---------------  -----  -----  -----  -----  -----  -----


Comments or suggestions?


David

