[Bug 242961] Crashes (elf64_coredump … vm_object_set_writeable_dirty) after the recent vm patch series
bugzilla-noreply at freebsd.org
Sun Dec 29 15:57:59 UTC 2019
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=242961
Bug ID: 242961
Summary: Crashes (elf64_coredump …
vm_object_set_writeable_dirty) after the recent vm
patch series
Product: Base System
Version: CURRENT
Hardware: Any
OS: Any
Status: New
Severity: Affects Only Me
Priority: ---
Component: kern
Assignee: bugs at FreeBSD.org
Reporter: greg at unrelenting.technology
Either the series ending with https://reviews.freebsd.org/D22885 or 'Correctly
implement PMAP_ENTER_NOREPLACE…' is causing my system to crash very soon after
entering the desktop (wayfire). Reverting both 'PMAP_…' and everything from
'Remove some unused functions' through 'Don't update per-page activation
counts…' fixed the problem.
The dump I got doesn't look desktop/GPU-specific in any way; it points at the
coredump path:
Fatal trap 12: page fault while in kernel mode
cpuid = 2; apic id = 02
fault virtual address = 0x89
fault code = supervisor read data, page not present
instruction pointer = 0x20:0xffffffff806e8d84
stack pointer = 0x0:0xfffffe00cdc812c0
frame pointer = 0x0:0xfffffe00cdc812c0
code segment = base 0x0, limit 0xfffff, type 0x1b
= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 63690 (cron)
trap number = 12
panic: page fault
cpuid = 2
time = 1577630640
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe00cdc80f30
vpanic() at vpanic+0x17e/frame 0xfffffe00cdc80f90
panic() at panic+0x43/frame 0xfffffe00cdc80ff0
trap_fatal() at trap_fatal+0x386/frame 0xfffffe00cdc81050
trap_pfault() at trap_pfault+0x4f/frame 0xfffffe00cdc810c0
trap() at trap+0x288/frame 0xfffffe00cdc811f0
calltrap() at calltrap+0x8/frame 0xfffffe00cdc811f0
--- trap 0xc, rip = 0xffffffff806e8d84, rsp = 0xfffffe00cdc812c0, rbp =
0xfffffe00cdc812c0 ---
vm_object_set_writeable_dirty() at vm_object_set_writeable_dirty+0x4/frame
0xfffffe00cdc812c0
vm_fault() at vm_fault+0x163f/frame 0xfffffe00cdc81400
vm_fault_quick_hold_pages() at vm_fault_quick_hold_pages+0x18a/frame
0xfffffe00cdc81480
vn_io_fault1() at vn_io_fault1+0x268/frame 0xfffffe00cdc815d0
vn_rdwr() at vn_rdwr+0x295/frame 0xfffffe00cdc816a0
vn_rdwr_inchunks() at vn_rdwr_inchunks+0x90/frame 0xfffffe00cdc81720
elf64_coredump() at elf64_coredump+0xbda/frame 0xfffffe00cdc81820
sigexit() at sigexit+0xba2/frame 0xfffffe00cdc81b00
postsig() at postsig+0x2f5/frame 0xfffffe00cdc81bc0
ast() at ast+0x2e7/frame 0xfffffe00cdc81bf0
doreti_ast() at doreti_ast+0x1f/frame 0x7fffffffdcb0
__curthread () at /usr/src/sys/amd64/include/pcpu_aux.h:55
55 __asm("movq %%gs:%P1,%0" : "=r" (td) : "n" (offsetof(struct
pcpu,
(kgdb) bt
[…]
#8 <signal handler called>
#9 vm_object_set_writeable_dirty (object=0x0) at
/usr/src/sys/vm/vm_object.c:2236
#10 0xffffffff806d461f in vm_fault_dirty (entry=0xfffff8003205e000,
m=0xfffffe0008806d60, prot=<optimized out>,
fault_type=<optimized out>, fault_flags=0) at
/usr/src/sys/vm/vm_fault.c:249
#11 vm_fault (map=0xfffff8002e7a5000, vaddr=140737488240640, fault_type=1
'\001', fault_flags=0,
m_hold=0xfffffe00cdc814c0) at /usr/src/sys/vm/vm_fault.c:1358
#12 0xffffffff806d58ba in vm_fault_quick_hold_pages (map=0xfffff8002e7a5000,
addr=140737488240640,
len=<optimized out>, prot=1 '\001', ma=0xfffffe00cdc81490,
max_count=<optimized out>)
at /usr/src/sys/vm/vm_fault.c:1657
#13 0xffffffff80510908 in vn_io_fault1 (vp=<optimized out>,
uio=0xfffffe00cdc81608, args=0xfffffe00cdc81638,
td=0xfffff80056797000) at /usr/src/sys/kern/vfs_vnops.c:1111
#14 0xffffffff80510565 in vn_rdwr (rw=<optimized out>, vp=0xfffff801264f6000,
base=<optimized out>,
len=<optimized out>, offset=<optimized out>, segflg=<optimized out>,
ioflg=16641,
active_cred=0xfffff80018530e00, file_cred=0x0, aresid=0xfffffe00cdc816e0,
td=0xfffff80056797000)
at /usr/src/sys/kern/vfs_vnops.c:603