FreeBSD-main-amd64-test - Build #28019 - Still Failing

From: <jenkins-admin_at_FreeBSD.org>
Date: Wed, 11 Mar 2026 18:41:51 UTC
FreeBSD-main-amd64-test - Build #28019 (738aea3387d831c95024fd28076dadde132ceaec) - Still Failing

Build information: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/28019/
Full change log: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/28019/changes
Full build log: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/28019/console

Status explanation:
"Failure" - the build is suspected of being broken by the following changes
"Still Failing" - the build has not been fixed by the following changes;
                  this notification indicates that these changes have
                  not been fully tested by the CI system

Change summaries:
(These commits are likely, but not certainly, responsible)

e71bfbe2f58ffff8f16a9da075d98fff41671bac by emaste:
llvm-*: Use SYMLINKS for unprefixed LLVM binutils

9da4a804f0916b24519b8baa7ed460a7ba23d8c8 by kib:
sigreturn.2: refresh the man page

738aea3387d831c95024fd28076dadde132ceaec by salvadore:
Calendars: Update status reports deadlines



The end of the build log:

[...truncated 4.51 MiB...]
#10 0xffffffff80bdaac6 at userland_sysctl+0x1b6
#11 0xffffffff80bda8d5 at sys___sysctl+0x65
#12 0xffffffff8112bd71 at amd64_syscall+0x451
#13 0xffffffff810f9f3b at fast_syscall_common+0xf8
uma_zalloc_debug: zone "MAP ENTRY" with 1 sleep inhibitors
stack backtrace:
#0 0xffffffff80c4404c at witness_debugger+0x6c
#1 0xffffffff80c45959 at witness_warn+0x4c9
#2 0xffffffff80f999c4 at uma_zalloc_debug+0x34
#3 0xffffffff80f99517 at uma_zalloc_arg+0x27
#4 0xffffffff80fb3811 at vm_map_entry_clone+0x1a1
#5 0xffffffff80fad7cb at vm_map_clip_end+0xcb
#6 0xffffffff80faf602 at vm_map_wire_locked+0x142
#7 0xffffffff80faf467 at vm_map_wire+0x67
#8 0xffffffff80fa6665 at vslock+0x75
#9 0xffffffff80bda6aa at sysctl_wire_old_buffer+0x4a
#10 0xffffffff80e19083 at sysctl_ip6_mcast_filters+0x93
#11 0xffffffff80bdb05c at sysctl_root_handler_locked+0x9c
#12 0xffffffff80bda3bf at sysctl_root+0x22f
#13 0xffffffff80bdaac6 at userland_sysctl+0x1b6
#14 0xffffffff80bda8d5 at sys___sysctl+0x65
#15 0xffffffff8112bd71 at amd64_syscall+0x451
#16 0xffffffff810f9f3b at fast_syscall_common+0xf8
vm_map_clip_start: map 0xfffff800b6311490 entry 0xfffff8014aa894e0 start 0x1ca0a5624000 with 1 sleep inhibitors
stack backtrace:
#0 0xffffffff80c4404c at witness_debugger+0x6c
#1 0xffffffff80c45959 at witness_warn+0x4c9
#2 0xffffffff80fad642 at vm_map_clip_start+0x42
#3 0xffffffff80faf5ec at vm_map_wire_locked+0x12c
#4 0xffffffff80faf467 at vm_map_wire+0x67
#5 0xffffffff80fa6665 at vslock+0x75
#6 0xffffffff80bda6aa at sysctl_wire_old_buffer+0x4a
#7 0xffffffff80e19083 at sysctl_ip6_mcast_filters+0x93
#8 0xffffffff80bdb05c at sysctl_root_handler_locked+0x9c
#9 0xffffffff80bda3bf at sysctl_root+0x22f
#10 0xffffffff80bdaac6 at userland_sysctl+0x1b6
#11 0xffffffff80bda8d5 at sys___sysctl+0x65
#12 0xffffffff8112bd71 at amd64_syscall+0x451
#13 0xffffffff810f9f3b at fast_syscall_common+0xf8
uma_zalloc_debug: zone "MAP ENTRY" with 1 sleep inhibitors
stack backtrace:
#0 0xffffffff80c4404c at witness_debugger+0x6c
#1 0xffffffff80c45959 at witness_warn+0x4c9
#2 0xffffffff80f999c4 at uma_zalloc_debug+0x34
#3 0xffffffff80f99517 at uma_zalloc_arg+0x27
#4 0xffffffff80fb3811 at vm_map_entry_clone+0x1a1
#5 0xffffffff80fad6cb at vm_map_clip_start+0xcb
#6 0xffffffff80faf5ec at vm_map_wire_locked+0x12c
#7 0xffffffff80faf467 at vm_map_wire+0x67
#8 0xffffffff80fa6665 at vslock+0x75
#9 0xffffffff80bda6aa at sysctl_wire_old_buffer+0x4a
#10 0xffffffff80e19083 at sysctl_ip6_mcast_filters+0x93
#11 0xffffffff80bdb05c at sysctl_root_handler_locked+0x9c
#12 0xffffffff80bda3bf at sysctl_root+0x22f
#13 0xffffffff80bdaac6 at userland_sysctl+0x1b6
#14 0xffffffff80bda8d5 at sys___sysctl+0x65
#15 0xffffffff8112bd71 at amd64_syscall+0x451
#16 0xffffffff810f9f3b at fast_syscall_common+0xf8
vm_map_clip_end: map 0xfffff800b6311490 entry 0xfffff8014aa894e0 end 0x1ca0a5625000 with 1 sleep inhibitors
stack backtrace:
#0 0xffffffff80c4404c at witness_debugger+0x6c
#1 0xffffffff80c45959 at witness_warn+0x4c9
#2 0xffffffff80fad742 at vm_map_clip_end+0x42
#3 0xffffffff80faf602 at vm_map_wire_locked+0x142
#4 0xffffffff80faf467 at vm_map_wire+0x67
#5 0xffffffff80fa6665 at vslock+0x75
#6 0xffffffff80bda6aa at sysctl_wire_old_buffer+0x4a
#7 0xffffffff80e19083 at sysctl_ip6_mcast_filters+0x93
#8 0xffffffff80bdb05c at sysctl_root_handler_locked+0x9c
#9 0xffffffff80bda3bf at sysctl_root+0x22f
#10 0xffffffff80bdaac6 at userland_sysctl+0x1b6
#11 0xffffffff80bda8d5 at sys___sysctl+0x65
#12 0xffffffff8112bd71 at amd64_syscall+0x451
#13 0xffffffff810f9f3b at fast_syscall_common+0xf8
uma_zalloc_debug: zone "MAP ENTRY" with 1 sleep inhibitors
stack backtrace:
#0 0xffffffff80c4404c at witness_debugger+0x6c
#1 0xffffffff80c45959 at witness_warn+0x4c9
#2 0xffffffff80f999c4 at uma_zalloc_debug+0x34
#3 0xffffffff80f99517 at uma_zalloc_arg+0x27
#4 0xffffffff80fb3811 at vm_map_entry_clone+0x1a1
#5 0xffffffff80fad7cb at vm_map_clip_end+0xcb
#6 0xffffffff80faf602 at vm_map_wire_locked+0x142
#7 0xffffffff80faf467 at vm_map_wire+0x67
#8 0xffffffff80fa6665 at vslock+0x75
#9 0xffffffff80bda6aa at sysctl_wire_old_buffer+0x4a
#10 0xffffffff80e19083 at sysctl_ip6_mcast_filters+0x93
#11 0xffffffff80bdb05c at sysctl_root_handler_locked+0x9c
#12 0xffffffff80bda3bf at sysctl_root+0x22f
#13 0xffffffff80bdaac6 at userland_sysctl+0x1b6
#14 0xffffffff80bda8d5 at sys___sysctl+0x65
#15 0xffffffff8112bd71 at amd64_syscall+0x451
#16 0xffffffff810f9f3b at fast_syscall_common+0xf8
vm_map_clip_start: map 0xfffff800b6311490 entry 0xfffff8022078b360 start 0x1ca0a5624000 with 1 sleep inhibitors
stack backtrace:
#0 0xffffffff80c4404c at witness_debugger+0x6c
#1 0xffffffff80c45959 at witness_warn+0x4c9
#2 0xffffffff80fad642 at vm_map_clip_start+0x42
#3 0xffffffff80faf5ec at vm_map_wire_locked+0x12c
#4 0xffffffff80faf467 at vm_map_wire+0x67
#5 0xffffffff80fa6665 at vslock+0x75
#6 0xffffffff80bda6aa at sysctl_wire_old_buffer+0x4a
#7 0xffffffff80e19083 at sysctl_ip6_mcast_filters+0x93
#8 0xffffffff80bdb05c at sysctl_root_handler_locked+0x9c
#9 0xffffffff80bda3bf at sysctl_root+0x22f
#10 0xffffffff80bdaac6 at userland_sysctl+0x1b6
#11 0xffffffff80bda8d5 at sys___sysctl+0x65
#12 0xffffffff8112bd71 at amd64_syscall+0x451
#13 0xffffffff810f9f3b at fast_syscall_common+0xf8
uma_zalloc_debug: zone "MAP ENTRY" with 1 sleep inhibitors
stack backtrace:
#0 0xffffffff80c4404c at witness_debugger+0x6c
#1 0xffffffff80c45959 at witness_warn+0x4c9
#2 0xffffffff80f999c4 at uma_zalloc_debug+0x34
#3 0xffffffff80f99517 at uma_zalloc_arg+0x27
#4 0xffffffff80fb3811 at vm_map_entry_clone+0x1a1
#5 0xffffffff80fad6cb at vm_map_clip_start+0xcb
#6 0xffffffff80faf5ec at vm_map_wire_locked+0x12c
#7 0xffffffff80faf467 at vm_map_wire+0x67
#8 0xffffffff80fa6665 at vslock+0x75
#9 0xffffffff80bda6aa at sysctl_wire_old_buffer+0x4a
#10 0xffffffff80e19083 at sysctl_ip6_mcast_filters+0x93
#11 0xffffffff80bdb05c at sysctl_root_handler_locked+0x9c
#12 0xffffffff80bda3bf at sysctl_root+0x22f
#13 0xffffffff80bdaac6 at userland_sysctl+0x1b6
#14 0xffffffff80bda8d5 at sys___sysctl+0x65
#15 0xffffffff8112bd71 at amd64_syscall+0x451
#16 0xffffffff810f9f3b at fast_syscall_common+0xf8
vm_map_clip_end: map 0xfffff800b6311490 entry 0xfffff8022078b360 end 0x1ca0a5625000 with 1 sleep inhibitors
stack backtrace:
#0 0xffffffff80c4404c at witness_debugger+0x6c
#1 0xffffffff80c45959 at witness_warn+0x4c9
#2 0xffffffff80fad742 at vm_map_clip_end+0x42
#3 0xffffffff80faf602 at vm_map_wire_locked+0x142
#4 0xffffffff80faf467 at vm_map_wire+0x67
#5 0xffffffff80fa6665 at vslock+0x75
#6 0xffffffff80bda6aa at sysctl_wire_old_buffer+0x4a
#7 0xffffffff80e19083 at sysctl_ip6_mcast_filters+0x93
#8 0xffffffff80bdb05c at sysctl_root_handler_locked+0x9c
#9 0xffffffff80bda3bf at sysctl_root+0x22f
#10 0xffffffff80bdaac6 at userland_sysctl+0x1b6
#11 0xffffffff80bda8d5 at sys___sysctl+0x65
#12 0xffffffff8112bd71 at amd64_syscall+0x451
#13 0xffffffff810f9f3b at fast_syscall_common+0xf8
uma_zalloc_debug: zone "MAP ENTRY" with 1 sleep inhibitors
stack backtrace:
#0 0xffffffff80c4404c at witness_debugger+0x6c
#1 0xffffffff80c45959 at witness_warn+0x4c9
#2 0xffffffff80f999c4 at uma_zalloc_debug+0x34
#3 0xffffffff80f99517 at uma_zalloc_arg+0x27
#4 0xffffffff80fb3811 at vm_map_entry_clone+0x1a1
#5 0xffffffff80fad7cb at vm_map_clip_end+0xcb
#6 0xffffffff80faf602 at vm_map_wire_locked+0x142
#7 0xffffffff80faf467 at vm_map_wire+0x67
#8 0xffffffff80fa6665 at vslock+0x75
#9 0xffffffff80bda6aa at sysctl_wire_old_buffer+0x4a
#10 0xffffffff80e19083 at sysctl_ip6_mcast_filters+0x93
#11 0xffffffff80bdb05c at sysctl_root_handler_locked+0x9c
#12 0xffffffff80bda3bf at sysctl_root+0x22f
#13 0xffffffff80bdaac6 at userland_sysctl+0x1b6
#14 0xffffffff80bda8d5 at sys___sysctl+0x65
#15 0xffffffff8112bd71 at amd64_syscall+0x451
#16 0xffffffff810f9f3b at fast_syscall_common+0xf8
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
failed: atf-check failed; see the output of the test for details  [5.101s]
sys/netinet6/redirect:valid_redirect  ->  epair0a: Ethernet address: ce:dd:af:dd:03:74
epair0b: Ethernet address: 5e:a1:cb:63:59:fa
epair0a: link state changed to UP
epair0b: link state changed to UP
panic: CURVNET_SET at /usr/src/sys/netinet6/nd6_nbr.c:1680 nd6_queue_timer() curvnet=0 vnet=0xfffff801966b6400
cpuid = 1
time = 1773254510
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe008bc5dc00
vpanic() at vpanic+0x136/frame 0xfffffe008bc5dd30
panic() at panic+0x43/frame 0xfffffe008bc5dd90
nd6_queue_timer() at nd6_queue_timer+0x207/frame 0xfffffe008bc5de10
softclock_call_cc() at softclock_call_cc+0x19b/frame 0xfffffe008bc5dec0
softclock_thread() at softclock_thread+0xc6/frame 0xfffffe008bc5def0
fork_exit() at fork_exit+0x82/frame 0xfffffe008bc5df30
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe008bc5df30
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---
KDB: enter: panic
[ thread pid 2 tid 100031 ]
Stopped at      kdb_enter+0x33: movq    $0,0x15ece52(%rip)
db:0:kdb.enter.panic> show pcpu
cpuid        = 1
dynamic pcpu = 0xfffffe008d3bdd80
curthread    = 0xfffff80102422780: pid 2 tid 100031 critnest 1 "clock (0)"
curpcb       = 0xfffff80102422cd0
fpcurthread  = none
idlethread   = 0xfffff8010240c000: tid 100004 "idle: cpu1"
self         = 0xffffffff82a11000
curpmap      = 0xffffffff81da1d00
tssp         = 0xffffffff82a11384
rsp0         = 0xfffffe008bc5e000
kcr3         = 0x8000000002944002
ucr3         = 0xffffffffffffffff
scr3         = 0x17ebd0e76
gs32p        = 0xffffffff82a11404
ldt          = 0xffffffff82a11444
tss          = 0xffffffff82a11434
curvnet      = 0
spin locks held:
db:0:kdb.enter.panic>  reset
Uptime: 49m26s
+ rc=0
+ echo 'bhyve return code = 0'
bhyve return code = 0
+ sudo /usr/sbin/bhyvectl '--vm=testvm-main-amd64-28019' --destroy
+ sh -ex freebsd-ci/scripts/test/extract-meta.sh
+ METAOUTDIR=meta-out
+ rm -fr meta-out
+ mkdir meta-out
+ tar xvf meta.tar -C meta-out
x ./
x ./auto-shutdown
x ./disable-dtrace-tests.sh
x ./run-kyua.sh
x ./run.sh
x ./disable-notyet-tests.sh
x ./disable-zfs-tests.sh
+ rm -f test-report.txt test-report.xml
+ mv 'meta-out/test-report.*' .
mv: rename meta-out/test-report.* to ./test-report.*: No such file or directory
+ report=test-report.xml
+ [ -e freebsd-ci/jobs/FreeBSD-main-amd64-test/xfail-list -a -e test-report.xml ]
+ rm -f disk-cam
+ jot 5
+ rm -f disk1
+ rm -f disk2
+ rm -f disk3
+ rm -f disk4
+ rm -f disk5
+ rm -f disk-test.img
[PostBuildScript] - [INFO] Executing post build scripts.
[FreeBSD-main-amd64-test] $ /bin/sh -xe /tmp/jenkins3161656848746588619.sh
+ ./freebsd-ci/artifact/post-link.py
Post link: {'job_name': 'FreeBSD-main-amd64-test', 'commit': '738aea3387d831c95024fd28076dadde132ceaec', 'branch': 'main', 'target': 'amd64', 'target_arch': 'amd64', 'link_type': 'latest_tested'}
"Link created: main/latest_tested/amd64/amd64 -> ../../738aea3387d831c95024fd28076dadde132ceaec/amd64/amd64\n"
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
Checking for post-build
Performing post-build step
Checking if email needs to be generated
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Sending mail from default account using System Admin e-mail address