FreeBSD-main-amd64-test - Build #28344 - Still Failing

From: <jenkins-admin_at_FreeBSD.org>
Date: Mon, 20 Apr 2026 22:14:38 UTC
FreeBSD-main-amd64-test - Build #28344 (1b8e5c02f5c07521129e06ff8ab7c660238fd75c) - Still Failing

Build information: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/28344/
Full change log: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/28344/changes
Full build log: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/28344/console

Status explanation:
"Failure" - the build is suspected of being broken by the following changes
"Still Failing" - the build has not been fixed by the following changes;
                  this is a notification that these changes have not been
                  fully tested by the CI system

Change summaries:
(These commits are likely, but not certainly, responsible)

1b8e5c02f5c07521129e06ff8ab7c660238fd75c by kevans:
amd64: fix INVLPGB range invalidation



The end of the build log:

[...truncated 4.17 MiB...]
epair0b: link state changed to DOWN
epair1a: link state changed to DOWN
epair1b: link state changed to DOWN
passed  [0.512s]
sys/netinet/output:output_raw_flowid_mpath_success  ->  epair0a: Ethernet address: 58:9c:fc:10:6f:20
epair0b: Ethernet address: 58:9c:fc:10:13:49
epair0a: link state changed to UP
epair0b: link state changed to UP
epair1a: Ethernet address: 58:9c:fc:10:6e:96
epair1b: Ethernet address: 58:9c:fc:10:d2:71
epair1a: link state changed to UP
epair1b: link state changed to UP
lo1: link state changed to UP
lo2: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
epair1a: link state changed to DOWN
epair1b: link state changed to DOWN
passed  [0.977s]
sys/netinet/output:output_raw_success  ->  epair0a: Ethernet address: 58:9c:fc:10:6f:20
epair0b: Ethernet address: 58:9c:fc:10:13:49
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
passed  [0.205s]
sys/netinet/output:output_tcp_flowid_mpath_success  ->  epair0a: Ethernet address: 58:9c:fc:10:6f:20
epair0b: Ethernet address: 58:9c:fc:10:13:49
epair0a: link state changed to UP
epair0b: link state changed to UP
epair1a: Ethernet address: 58:9c:fc:10:6e:96
epair1b: Ethernet address: 58:9c:fc:10:d2:71
epair1a: link state changed to UP
epair1b: link state changed to UP
lo1: link state changed to UP
lo2: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
epair1a: link state changed to DOWN
epair1b: link state changed to DOWN
passed  [3.566s]
sys/netinet/output:output_tcp_setup_success  ->  epair0a: Ethernet address: 58:9c:fc:10:6f:20
epair0b: Ethernet address: 58:9c:fc:10:13:49
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
passed  [0.333s]
sys/netinet/output:output_udp_flowid_mpath_success  ->  epair0a: Ethernet address: 58:9c:fc:10:6f:20
epair0b: Ethernet address: 58:9c:fc:10:13:49
epair0a: link state changed to UP
epair0b: link state changed to UP
epair1a: Ethernet address: 58:9c:fc:10:6e:96
epair1b: Ethernet address: 58:9c:fc:10:d2:71
epair1a: link state changed to UP
epair1b: link state changed to UP
lo1: link state changed to UP
lo2: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
epair1a: link state changed to DOWN
epair1b: link state changed to DOWN
passed  [8.989s]
sys/netinet/output:output_udp_setup_success  ->  epair0a: Ethernet address: 58:9c:fc:10:6f:20
epair0b: Ethernet address: 58:9c:fc:10:13:49
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
passed  [1.379s]
sys/netinet/raw:input  ->  lo0: link state changed to UP
passed  [0.074s]
sys/netinet/raw:reconnect  ->  lo0: link state changed to UP
passed  [0.060s]
sys/netinet/redirect:valid_redirect  ->  epair0a: Ethernet address: 58:9c:fc:10:6f:20
epair0b: Ethernet address: 58:9c:fc:10:13:49
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0a: promiscuous mode enabled
epair0a: promiscuous mode disabled
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
passed  [2.318s]
sys/netinet/so_reuseport_lb_test:basic_ipv4  ->  Limiting tcp reset response from 7273 to 186 packets/sec
Limiting tcp reset response from 7080 to 197 packets/sec
passed  [2.229s]
sys/netinet/so_reuseport_lb_test:basic_ipv6  ->  Limiting tcp reset response from 6642 to 188 packets/sec
Limiting tcp reset response from 6625 to 192 packets/sec
passed  [2.463s]
sys/netinet/so_reuseport_lb_test:bind_without_listen  ->  passed  [0.009s]
sys/netinet/so_reuseport_lb_test:concurrent_add  ->  passed  [2.601s]
sys/netinet/so_reuseport_lb_test:connect_bound  ->  passed  [0.007s]
sys/netinet/so_reuseport_lb_test:connect_not_bound  ->  passed  [0.007s]
sys/netinet/so_reuseport_lb_test:connect_udp  ->  passed  [0.211s]
sys/netinet/so_reuseport_lb_test:connect_udp6  ->  passed  [0.208s]
sys/netinet/so_reuseport_lb_test:double_listen_ipv4  ->  passed  [0.007s]
sys/netinet/so_reuseport_lb_test:double_listen_ipv6  ->  passed  [0.007s]
sys/netinet/socket_afinet:socket_afinet  ->  passed  [0.007s]
sys/netinet/socket_afinet:socket_afinet_bind_connected_port  ->  passed  [0.011s]
sys/netinet/socket_afinet:socket_afinet_bind_ok  ->  passed  [0.008s]
sys/netinet/socket_afinet:socket_afinet_bind_zero  ->  passed  [0.007s]
sys/netinet/socket_afinet:socket_afinet_bindany  ->  passed  [0.008s]
sys/netinet/socket_afinet:socket_afinet_multibind  ->  passed  [0.198s]
sys/netinet/socket_afinet:socket_afinet_poll_no_rdhup  ->  passed  [0.007s]
sys/netinet/socket_afinet:socket_afinet_poll_rdhup  ->  passed  [0.007s]
sys/netinet/socket_afinet:socket_afinet_stream_reconnect  ->  passed  [0.007s]
sys/netinet/tcp_connect_port_test:basic_ipv4  ->  Limiting tcp reset response from 4567 to 185 packets/sec
Limiting tcp reset response from 9181 to 214 packets/sec
Limiting tcp reset response from 9080 to 209 packets/sec
Limiting tcp reset response from 8985 to 189 packets/sec
Limiting tcp reset response from 8480 to 204 packets/sec
Limiting tcp reset response from 8934 to 209 packets/sec
Limiting tcp reset response from 9009 to 191 packets/sec
Limiting tcp reset response from 8864 to 197 packets/sec
passed  [7.274s]
sys/netinet/tcp_connect_port_test:basic_ipv6  ->  Limiting tcp reset response from 7306 to 197 packets/sec
Limiting tcp reset response from 6626 to 198 packets/sec
Limiting tcp reset response from 6825 to 193 packets/sec
Limiting tcp reset response from 7025 to 215 packets/sec
Limiting tcp reset response from 7038 to 209 packets/sec
Limiting tcp reset response from 7008 to 191 packets/sec
Limiting tcp reset response from 6971 to 202 packets/sec
Limiting tcp reset response from 6636 to 203 packets/sec
Limiting tcp reset response from 6946 to 211 packets/sec
passed  [9.372s]
TCP HPTS started 2 ((unbounded)) swi interrupt threads
sys/netinet/tcp_hpts_test.py:TestTcpHpts::test_concurrent_operations  ->  passed  [0.561s]
sys/netinet/tcp_hpts_test.py:TestTcpHpts::test_cpu_assignment  ->  passed  [0.558s]
sys/netinet/tcp_hpts_test.py:TestTcpHpts::test_deferred_requests  ->  Kernel page fault with the following non-sleepable locks held:
exclusive rw test-inp (test-inp) r = 0 (0xfffff80042522830) locked @ /usr/src/sys/netinet/tcp_hpts.c:1275
Kernel page fault with 1 sleep inhibitors
stack backtrace:
#0 0xffffffff80c54c3c at witness_debugger+0x6c
#1 0xffffffff80c56549 at witness_warn+0x4c9
#2 0xffffffff8113cc6c at trap_pfault+0x8c
#3 0xffffffff8110c398 at calltrap+0x8
#4 0xffffffff8315f866 at __ktest_deferred_requests+0x996
#5 0xffffffff830e3abd at run_test+0x2ad
#6 0xffffffff80e6df1c at nl_receive_message+0x11c
#7 0xffffffff80e6d865 at nl_taskqueue_handler+0x3e5
#8 0xffffffff80c46912 at taskqueue_run_locked+0x1c2
#9 0xffffffff80c47803 at taskqueue_thread_loop+0xd3
#10 0xffffffff80b899d2 at fork_exit+0x82
#11 0xffffffff8110d3be at fork_trampoline+0xe


Fatal trap 12: page fault while in kernel mode
cpuid = 1; apic id = 01
fault virtual address	= 0xc8
fault code		= supervisor read data, page not present
instruction pointer	= 0x20:0xffffffff8316a3f1
stack pointer	        = 0:0xfffffe008c0f4ab0
frame pointer	        = 0:0xfffffe008c0f4bd0
code segment		= base 0x0, limit 0xfffff, type 0x1b
			= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags	= interrupt enabled, resume, IOPL = 0
current process		= 0 (netlink_socket (PID)
rdi: fffff800034b9e00 rsi: 0000000000000001 rdx: ffffffff822686b0
rcx: 0000000000000001  r8: ffffffff81ad2ae0  r9: 0000000000000001
rax: 0000000000000000 rbx: 0000000000002000 rbp: fffffe008c0f4bd0
r10: 0000000000000000 r11: 0000000000000001 r12: fffff80042522800
r13: 0000000000000000 r14: fffff80042522848 r15: 0000000000001388
trap number		= 12
panic: page fault
cpuid = 1
time = 1776723266
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe008c0f47e0
vpanic() at vpanic+0x136/frame 0xfffffe008c0f4910
panic() at panic+0x43/frame 0xfffffe008c0f4970
trap_pfault() at trap_pfault+0x422/frame 0xfffffe008c0f49e0
calltrap() at calltrap+0x8/frame 0xfffffe008c0f49e0
--- trap 0xc, rip = 0xffffffff8316a3f1, rsp = 0xfffffe008c0f4ab0, rbp = 0xfffffe008c0f4bd0 ---
tcp_hptsi() at tcp_hptsi+0x6f1/frame 0xfffffe008c0f4bd0
__ktest_deferred_requests() at __ktest_deferred_requests+0x996/frame 0xfffffe008c0f4c60
run_test() at run_test+0x2ad/frame 0xfffffe008c0f4d00
nl_receive_message() at nl_receive_message+0x11c/frame 0xfffffe008c0f4d40
nl_taskqueue_handler() at nl_taskqueue_handler+0x3e5/frame 0xfffffe008c0f4e40
taskqueue_run_locked() at taskqueue_run_locked+0x1c2/frame 0xfffffe008c0f4ec0
taskqueue_thread_loop() at taskqueue_thread_loop+0xd3/frame 0xfffffe008c0f4ef0
fork_exit() at fork_exit+0x82/frame 0xfffffe008c0f4f30
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe008c0f4f30
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---
KDB: enter: panic
[ thread pid 0 tid 107593 ]
Stopped at      kdb_enter+0x33: movq    $0,0x15dd2e2(%rip)
db:0:kdb.enter.panic> show pcpu
cpuid        = 1
dynamic pcpu = 0xfffffe008d3bccc0
curthread    = 0xfffff8009e470780: pid 0 tid 107593 critnest 1 "netlink_socket (PID"
curpcb       = 0xfffff8009e470cd0
fpcurthread  = none
idlethread   = 0xfffff8010240c000: tid 100004 "idle: cpu1"
self         = 0xffffffff82a11000
curpmap      = 0xffffffff81da2d20
tssp         = 0xffffffff82a11384
rsp0         = 0xfffffe008c0f5000
kcr3         = 0x8000000002946002
ucr3         = 0xffffffffffffffff
scr3         = 0x34232ea0
gs32p        = 0xffffffff82a11404
ldt          = 0xffffffff82a11444
tss          = 0xffffffff82a11434
curvnet      = 0xfffff800030e7ec0
spin locks held:
db:0:kdb.enter.panic>  reset
Uptime: 50m42s
+ rc=0
+ echo 'bhyve return code = 0'
bhyve return code = 0
+ sudo /usr/sbin/bhyvectl '--vm=testvm-main-amd64-28344' --destroy
+ sh -ex freebsd-ci/scripts/test/extract-meta.sh
+ METAOUTDIR=meta-out
+ rm -fr meta-out
+ mkdir meta-out
+ tar xvf meta.tar -C meta-out
x ./
x ./run.sh
x ./run-kyua.sh
x ./disable-dtrace-tests.sh
x ./auto-shutdown
x ./disable-notyet-tests.sh
x ./disable-zfs-tests.sh
+ rm -f 'test-report.*'
+ mv 'meta-out/test-report.*' .
mv: rename meta-out/test-report.* to ./test-report.*: No such file or directory
+ report=test-report.xml
+ [ -e freebsd-ci/jobs/FreeBSD-main-amd64-test/xfail-list -a -e test-report.xml ]
+ rm -f disk-cam
+ jot 5
+ rm -f disk1
+ rm -f disk2
+ rm -f disk3
+ rm -f disk4
+ rm -f disk5
+ rm -f disk-test.img
[PostBuildScript] - [INFO] Executing post build scripts.
[FreeBSD-main-amd64-test] $ /bin/sh -xe /tmp/jenkins15069188706125549748.sh
+ ./freebsd-ci/artifact/post-link.py
Post link: {'job_name': 'FreeBSD-main-amd64-test', 'commit': '1b8e5c02f5c07521129e06ff8ab7c660238fd75c', 'branch': 'main', 'target': 'amd64', 'target_arch': 'amd64', 'link_type': 'latest_tested'}
"Link created: main/latest_tested/amd64/amd64 -> ../../1b8e5c02f5c07521129e06ff8ab7c660238fd75c/amd64/amd64\n"
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
Checking for post-build
Performing post-build step
Checking if email needs to be generated
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Sending mail from default account using System Admin e-mail address