Debug stalling Raspberry
Johan Henselmans
johan at netsense.nl
Wed Aug 21 07:48:55 UTC 2013
On 20 aug. 2013, at 22:10, Martin Laabs <mailinglists at martinlaabs.de> wrote:
> Hi,
>
> currently I run r254441 on a Raspberry Pi B, and every time I run portsnap
> the CPU stops running.
> This happens during the snapshot verify, while around 25k files are gunzipped
> and sha256ed (file after file, of course). I can reproduce this, but the Pi
> does not hang reproducibly at the same file; the last processed file
> differs from try to try.
I have exactly the same experience on a BBB, also with portsnap. The only way I could get portsnap to finish was by locating /var/db/portsnap on NFS.
I am still testing what is causing it. I have used class 4 and class 10 cards; both freeze.
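If the hang really is triggered by the sustained gunzip+sha256 load on the card, it might be reproducible without portsnap at all. A minimal stand-alone sketch (file names and counts invented here; the real verify touches ~25k files) that generates similar back-to-back CPU and I/O pressure:

```shell
# Hypothetical stress loop approximating portsnap's verify phase:
# gunzip-test each snapshot file and SHA-256 its contents, one after another.
workdir=$(mktemp -d)
sha=$(command -v sha256sum || command -v sha256)   # FreeBSD ships sha256(1)

# Create a few sample .gz files standing in for the snapshot files.
for i in 1 2 3 4 5; do
    head -c 65536 /dev/urandom | gzip > "$workdir/file$i.gz"
done

count=0
for f in "$workdir"/*.gz; do
    gzip -t "$f" || break              # integrity check, as the verify does
    gunzip -c "$f" | "$sha" > /dev/null
    count=$((count + 1))
done
echo "verified $count files"
rm -rf "$workdir"
```

Bumping the file count and size up and running this from the SD card would show whether the freeze needs portsnap itself or just this access pattern.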
I have the serial console connected to screen, and nothing is displayed during the freeze.
I do get an error about the keyboard during startup. Tim mentioned that pressing a key would make the BBB continue; I have not tested that, as I do not see a console on my HDMI monitor on the BBB.
(r254571)
=========
link_elf: symbol genkbd_get_fkeystr undefined
link_elf: symbol genkbd_get_fkeystr undefined
aintc0: Spurious interrupt detected (0xffffffff)
=========
Another message I got while starting up:
=========
lock order reversal:
1st 0xc09aa0bc pmap (pmap) @ /usr/src/sys/arm/arm/pmap-v6.c:2967
2nd 0xc07c5fc0 kmem vm object (kmem vm object) @ /usr/src/sys/vm/vm_kern.c:344
KDB: stack backtrace:
db_trace_self() at db_trace_self
pc = 0xc05293f8 lr = 0xc022db88 (db_trace_self_wrapper+0x30)
sp = 0xc2ab4930 fp = 0xc2ab4a48
r10 = 0xc09aa0bc
db_trace_self_wrapper() at db_trace_self_wrapper+0x30
pc = 0xc022db88 lr = 0xc038d150 (kdb_backtrace+0x38)
sp = 0xc2ab4a50 fp = 0xc2ab4a58
r4 = 0xc065ed14 r5 = 0xc05a6f1c
r6 = 0xc058a3b3 r7 = 0xc05abd59
kdb_backtrace() at kdb_backtrace+0x38
pc = 0xc038d150 lr = 0xc03a72c0 (witness_checkorder+0xddc)
sp = 0xc2ab4a60 fp = 0xc2ab4ab0
r4 = 0xc05a7fd4
witness_checkorder() at witness_checkorder+0xddc
pc = 0xc03a72c0 lr = 0xc0355768 (_rw_wlock_cookie+0x7c)
sp = 0xc2ab4ab8 fp = 0xc2ab4ae0
r4 = 0x00000158 r5 = 0xc05a6f19
r6 = 0xc07c5fd0 r7 = 0xc07c5fc0
r8 = 0xc07c5fc0 r9 = 0x00000101
r10 = 0x00000000
_rw_wlock_cookie() at _rw_wlock_cookie+0x7c
pc = 0xc0355768 lr = 0xc0504794 (kmem_back+0x68)
sp = 0xc2ab4ae8 fp = 0xc2ab4b28
r4 = 0xc07c5fc0 r5 = 0x00001000
r6 = 0x00000000 r7 = 0xc07c5fc0
r8 = 0x00000101
kmem_back() at kmem_back+0x68
pc = 0xc0504794 lr = 0xc05046f0 (kmem_malloc+0x6c)
sp = 0xc2ab4b30 fp = 0xc2ab4b48
r4 = 0xc0661780 r5 = 0x00001000
r6 = 0x00000000 r7 = 0x00000101
r8 = 0xc04fd5e0 r9 = 0x00000101
r10 = 0x00000000
kmem_malloc() at kmem_malloc+0x6c
pc = 0xc05046f0 lr = 0xc04fd600 (page_alloc+0x20)
sp = 0xc2ab4b50 fp = 0xc2ab4b50
r4 = 0xc09d3cc0 r5 = 0x00000001
r6 = 0x00000000 r7 = 0xc09d3cd0
page_alloc() at page_alloc+0x20
pc = 0xc04fd600 lr = 0xc04fd094 (keg_alloc_slab+0xb4)
sp = 0xc2ab4b58 fp = 0xc2ab4b80
keg_alloc_slab() at keg_alloc_slab+0xb4
pc = 0xc04fd094 lr = 0xc04fdcd0 (keg_fetch_slab+0x148)
sp = 0xc2ab4b88 fp = 0xc2ab4bc0
r4 = 0xc09d3cc0 r5 = 0xc09ce408
r6 = 0x00000001 r7 = 0xc09ce360
r8 = 0x00000000 r9 = 0xc09ce3f8
r10 = 0x00000000
keg_fetch_slab() at keg_fetch_slab+0x148
pc = 0xc04fdcd0 lr = 0xc04fe0c4 (zone_fetch_slab+0x64)
sp = 0xc2ab4bc8 fp = 0xc2ab4be0
r4 = 0x00000001 r5 = 0xc09ce360
r6 = 0xc09d3cc0 r7 = 0xc09d3cc0
r8 = 0x00000001 r9 = 0xc2ff4fa8
r10 = 0x00000002
zone_fetch_slab() at zone_fetch_slab+0x64
pc = 0xc04fe0c4 lr = 0xc04fe150 (zone_import+0x4c)
sp = 0xc2ab4be8 fp = 0xc2ab4c28
r4 = 0xc2ff4fac r5 = 0xc05a621a
r6 = 0x00000001 r7 = 0xc09d3cc0
r8 = 0x00000000
zone_import() at zone_import+0x4c
uhub2: 4 ports with 4 removable, self powered
pc = 0xc04fe150 lr = 0xc04fbdc0 (uma_zalloc_arg+0x2a0)
sp = 0xc2ab4c30 fp = 0xc2ab4c70
r4 = 0x00000001 r5 = 0xc05a621a
r6 = 0xc09b0e0c r7 = 0xc04fe104
r8 = 0xc09ce360 r9 = 0xc09ce418
r10 = 0xc09b0e00
uma_zalloc_arg() at uma_zalloc_arg+0x2a0
pc = 0xc04fbdc0 lr = 0xc053349c (pmap_alloc_l2_bucket+0x1b4)
sp = 0xc2ab4c78 fp = 0xc2ab4ca0
r4 = 0xc05abd56 r5 = 0xc09999f8
r6 = 0xc09999f4 r7 = 0xc07c0de8
r8 = 0xc05abd56 r9 = 0xc09abaac
r10 = 0xc09abb38
pmap_alloc_l2_bucket() at pmap_alloc_l2_bucket+0x1b4
pc = 0xc053349c lr = 0xc0533158 (pmap_copy+0x158)
sp = 0xc2ab4ca8 fp = 0xc2ab4ce0
r4 = 0xc09aba9c r5 = 0x20049000
r6 = 0xc05abd56 r7 = 0x2002e000
r8 = 0x0001b000 r9 = 0xc09964b8
r10 = 0x0001b000
pmap_copy() at pmap_copy+0x158
pc = 0xc0533158 lr = 0xc050a660 (vmspace_fork+0x790)
sp = 0xc2ab4ce8 fp = 0xc2ab4d20
r4 = 0xc09aa000 r5 = 0x00000000
r6 = 0x2002e000 r7 = 0xc099c500
r8 = 0xc09ab9e0 r9 = 0xc099df50
r10 = 0x0001b000
vmspace_fork() at vmspace_fork+0x790
pc = 0xc050a660 lr = 0xc0327004 (fork1+0x1a4)
sp = 0xc2ab4d28 fp = 0xc2ab4d98
r4 = 0xc2ffc960 r5 = 0x00000000
r6 = 0xc2fc5000 r7 = 0x0000000c
r8 = 0xc2fc5320 r9 = 0xc2ffcc80
r10 = 0xc2ab4dac
fork1() at fork1+0x1a4
pc = 0xc0327004 lr = 0xc0326e40 (sys_fork+0x24)
sp = 0xc2ab4da0 fp = 0xc2ab4db8
r4 = 0xc2ffcc80 r5 = 0x00000000
r6 = 0x00000000 r7 = 0x00000000
r8 = 0xc2ab4e10 r9 = 0xc2fc5320
r10 = 0x00000000
sys_fork() at sys_fork+0x24
pc = 0xc0326e40 lr = 0xc0538ee4 (swi_handler+0x284)
sp = 0xc2ab4dc0 fp = 0xc2ab4e58
r4 = 0xc2ffcc80 r5 = 0x00000000
swi_handler() at swi_handler+0x284
pc = 0xc0538ee4 lr = 0xc052aa54 (swi_entry+0x2c)
sp = 0xc2ab4e60 fp = 0xbfffec18
r4 = 0x00030998 r5 = 0x2080d020
r6 = 0x00000000 r7 = 0x00000002
r8 = 0x00000003 r9 = 0x2080d020
swi_entry() at swi_entry+0x2c
pc = 0xc052aa54 lr = 0xc052aa54 (swi_entry+0x2c)
sp = 0xc2ab4e60 fp = 0xbfffec18
Unable to unwind further
=========
>
> There is, at least at the video console, no kernel panic. The kernel itself
> still responds to ICMP ping packets and echoes the keyboard input. But
> everything else does not work. (I know this behavior from disconnected
> hard disks containing the kernel/system.)
> The current consumption also drops by around 100 mA.
>
> It is very interesting that this behavior is not limited to the internal
> MMC card but also occurs when the data is stored externally on a USB stick.
>
> My question is how to debug this, since I have no idea where to start.
> From my experience with bare-metal ARM systems I would start by connecting a
> JTAG debugger, but I am afraid of getting all the symbols mapped correctly in
> gdb. And even if that works: what should I monitor, and what should I test for?
> There might, however, be a simpler solution, so any suggestion is welcome.
>
> If you can reproduce this bug I also would be very happy.
>
> Best regards,
> Martin Laabs
>
> _______________________________________________
> freebsd-arm at freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-arm
> To unsubscribe, send any mail to "freebsd-arm-unsubscribe at freebsd.org"
Johan Henselmans
johan at netsense.nl