sparc64 hang with zfs v28
Roger Hammerstein
cheeky.m at live.com
Wed Mar 2 16:46:38 UTC 2011
I saw the announcement for ZFS v28, so I updated an Ultra 60
to the top of the tree. A 'zfs list', 'zpool status', or 'kldload zfs'
will hang the machine.
I can break into the debugger via the serial console after enabling
the alternate break sequence.
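(For anyone following along: the alternate break sequence is the CR ~ Ctrl-B
key chord, useful when the serial console can't send a real BREAK. A minimal
sketch of enabling it, assuming a stock kernel where the
debug.kdb.alt_break_to_debugger knob is available:)

```shell
# Enable the alternate break sequence (CR ~ Ctrl-B) at runtime:
sysctl debug.kdb.alt_break_to_debugger=1

# Make it persistent across reboots:
echo 'debug.kdb.alt_break_to_debugger=1' >> /etc/sysctl.conf
```

(The equivalent kernel config option is ALT_BREAK_TO_DEBUGGER; exact knob
names may differ between releases, so check your ddb(4)/kdb docs.)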
Has anyone else tried the latest ZFS on sparc64 machines?
falcon# uname -a
FreeBSD falcon 9.0-CURRENT FreeBSD 9.0-CURRENT #2: Wed Mar 2 11:16:56 EST 2011 root at falcon:/usr/obj/usr/src/sys/GENERIC sparc64
falcon# kldload zfs
[HANG]
vmstat running in another window hangs too, and doesn't print anything
helpful first:
0 0 0 457M 1920M 11 0 0 0 0 0 0 0 2279 219 413 0 1 99
0 0 0 457M 1920M 0 0 0 0 0 0 0 0 2271 117 372 0 0 100
0 0 0 457M 1920M 11 0 0 0 0 0 0 0 2274 173 393 0 1 99
1 0 0 460M 1913M 84 0 1 0 13 0 0 0 2331 241 855 0 5 95
[hang]
falcon# KDB: enter: Break sequence on console
[ thread pid 1013 tid 100058 ]
Stopped at kdb_enter+0x80: ta %xcc, 1
db>
db> ps
pid ppid pgrp uid state wmesg wchan cmd
1013 1006 1013 0 R+ CPU 1 kldload
1006 1003 1006 0 Ss+ pause 0xfffff80001886dd8 csh
1003 875 1003 0 Ss select 0xfffff8000142cc40 sshd
1002 998 1002 0 S+ nanslp 0xc0ac8c28 vmstat
998 995 998 0 Ss+ pause 0xfffff80001887240 csh
995 875 995 0 Ss select 0xfffff8000142ca40 sshd
994 987 994 0 S+ select 0xfffff800014c92c0 top
987 984 987 0 Ss+ pause 0xfffff80001889240 csh
984 875 984 0 Ss select 0xfffff8000142c740 sshd
980 975 980 0 S+ ttyin 0xfffff800011394a8 csh
976 1 1 0 S ttydcd 0xfffff800011390e8 getty
975 1 975 0 Ss+ wait 0xfffff8000179a000 login
974 1 974 0 Ss+ ttyin 0xfffff8000113b8a8 getty
973 1 973 0 Ss+ ttyin 0xfffff8000113bca8 getty
972 1 972 0 Ss+ ttyin 0xfffff800013d00a8 getty
971 1 971 0 Ss+ ttyin 0xfffff800013d04a8 getty
970 1 970 0 Ss+ ttyin 0xfffff800013d08a8 getty
969 1 969 0 Ss+ ttyin 0xfffff800011380a8 getty
968 1 968 0 Ss+ ttyin 0xfffff800011384a8 getty
967 1 967 0 Ss+ ttyin 0xfffff800011388a8 getty
894 1 894 0 Ss nanslp 0xc0ac8c28 cron
887 1 887 25 Ss pause 0xfffff80001651240 sendmail
883 1 883 0 Ss select 0xfffff800017a27c0 sendmail
875 1 875 0 Ss select 0xfffff8000142bb40 sshd
795 1 795 0 Ss select 0xfffff8000142b540 ntpd
591 1 591 0 Ss select 0xfffff8000142b240 syslogd
414 1 414 0 Ss select 0xfffff8000142ae40 devd
109 1 109 0 Ss pause 0xfffff80001568dd8 adjkerntz
18 0 0 0 DL - 0xc0ac7978 [schedcpu]
17 0 0 0 DL sdflush 0xc0c93350 [softdepflush]
16 0 0 0 DL vlruwt 0xfffff800010ca8d0 [vnlru]
15 0 0 0 DL syncer 0xc0c84978 [syncer]
14 0 0 0 DL psleep 0xc0c844a8 [bufdaemon]
9 0 0 0 DL pgzero 0xc0c96474 [pagezero]
8 0 0 0 DL psleep 0xc0c952b0 [vmdaemon]
7 0 0 0 DL psleep 0xc0c952ec [pagedaemon]
6 0 0 0 DL ccb_scan 0xc0aa7fb8 [xpt_thrd]
5 0 0 0 DL waiting_ 0xc0c874e0 [sctp_iterator]
13 0 0 0 DL - 0xc0ac7978 [yarrow]
4 0 0 0 DL - 0xc0ac3d60 [g_down]
3 0 0 0 DL - 0xc0ac3d58 [g_up]
2 0 0 0 DL - 0xc0ac3d48 [g_event]
12 0 0 0 RL (threaded) [intr]
100026 I [vec2022: sym1]
100025 I [vec2016: sym0]
100024 RunQ [vec2017: hme0]
100023 RunQ [swi0: uart uart+]
100022 I [vec2024: pcib0]
100021 I [vec2021: pcib0]
100020 I [swi6: task queue]
100019 I [swi6: Giant taskq]
100016 I [swi5: +]
100015 I [swi2: cambio]
100008 I [swi3: vm]
100007 I [swi1: netisr 0]
100006 RunQ [swi4: clock]
100005 Run CPU 0 [swi4: clock]
11 0 0 0 RL (threaded) [idle]
100004 CanRun [idle: cpu0]
100003 CanRun [idle: cpu1]
1 0 1 0 SLs wait 0xfffff800010c9a70 [init]
10 0 0 0 DL audit_wo 0xc0c927d8 [audit]
0 0 0 0 DLs (threaded) [kernel]
100027 D - 0xc0ac7978 [deadlkres]
100018 D - 0xfffff8000108d400 [thread taskq]
100017 D - 0xfffff8000108d480 [ffs_trim taskq]
100014 D - 0xfffff8000108d580 [kqueue taskq]
100012 D - 0xfffff8000108d600 [firmware taskq]
100000 D sched 0xc0ac3f18 [swapper]
db>
db> trace
Tracing pid 1013 tid 100058 td 0xfffff80001428cc0
uart_intr_rxready() at uart_intr_rxready+0xbc
scc_bfe_intr() at scc_bfe_intr+0xbc
intr_event_handle() at intr_event_handle+0x64
intr_execute_handlers() at intr_execute_handlers+0x8
intr_fast() at intr_fast+0x68
-- interrupt level=0xc pil=0 %o7=0xc0477e68 --
fixup_filename() at fixup_filename+0x4
witness_checkorder() at witness_checkorder+0x98
_mtx_lock_flags() at _mtx_lock_flags+0x110
_vm_map_lock_read() at _vm_map_lock_read+0x1c
vm_map_lookup() at vm_map_lookup+0x4c
vm_fault_hold() at vm_fault_hold+0x94
vm_fault() at vm_fault+0x14
trap_pfault() at trap_pfault+0x338
trap() at trap+0x3a8
-- fast data access mmu miss tar=0xc18d4000 %o7=0xc03fa894 --
opensolaris_utsname_init() at opensolaris_utsname_init+0x8c
linker_load_dependencies() at linker_load_dependencies+0x260
link_elf_load_file() at link_elf_load_file+0x5ac
linker_load_module() at linker_load_module+0xa30
kern_kldload() at kern_kldload+0xb8
kldload() at kldload+0x60
syscallenter() at syscallenter+0x270
syscall() at syscall+0x74
-- syscall (304, FreeBSD ELF64, kldload) %o7=0x100cbc --
userland() at 0x40475108
user trace: trap %o7=0x100cbc
pc 0x40475108, sp 0x7fdffffdc51
pc 0x100a90, sp 0x7fdffffe1d1
pc 0x40206fb4, sp 0x7fdffffe291
done
db>
Anyone else try it yet?
(How can I show what pid 1013 is doing?)
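(Partly answering my own question: the `trace` above already shows the
current thread, since pid 1013 is the one on the CPU when the break fires.
In general ddb can target another thread or process by id; a sketch of the
relevant commands, per the ddb(4) manual, though availability can vary with
kernel config:)

```
db> trace 1013
db> show thread 100058
```

(`trace` accepts a pid or tid and prints that thread's backtrace;
`show thread` takes a tid and dumps its state and wait channel.)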
More information about the freebsd-sparc64
mailing list