[Bug 203906] ZFS lockup, spa_namespace_lock
bugzilla-noreply at freebsd.org
Wed Oct 28 17:34:04 UTC 2015
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=203906
--- Comment #2 from Adrian Palmer <uwyo.apalmer at gmail.com> ---
$ procstat -kk -a
PID TID COMM TDNAME KSTACK
1252 100078 csh - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d kern_sigsuspend+0xf4
sys_sigsuspend+0x31 amd64_syscall+0x351 Xfast_syscall+0xfb
38986 100168 zpool - mi_switch+0xe1 sleepq_wait+0x3a
_sx_xlock_hard+0x48a _sx_xlock+0x5d spa_all_configs+0x6e
zfs_ioc_pool_configs+0x19 zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114
kern_ioctl+0x255 sys_ioctl+0x13c amd64_syscall+0x351 Xfast_syscall+0xfb
41360 101772 csh - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d kern_sigsuspend+0xf4
sys_sigsuspend+0x31 amd64_syscall+0x351 Xfast_syscall+0xfb
41661 100159 gdb - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d kern_wait6+0x5f4
sys_wait4+0x72 amd64_syscall+0x351 Xfast_syscall+0xfb
41662 101773 zpool - mi_switch+0xe1 sleepq_wait+0x3a
_sx_xlock_hard+0x48a _sx_xlock+0x5d spa_all_configs+0x6e
zfs_ioc_pool_configs+0x19 zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114
kern_ioctl+0x255 sys_ioctl+0x13c amd64_syscall+0x351 Xfast_syscall+0xfb
42143 100174 zpool - mi_switch+0xe1 sleepq_wait+0x3a
_sx_xlock_hard+0x48a _sx_xlock+0x5d spa_all_configs+0x6e
zfs_ioc_pool_configs+0x19 zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114
kern_ioctl+0x255 sys_ioctl+0x13c amd64_syscall+0x351 Xfast_syscall+0xfb
42242 101757 zfs - mi_switch+0xe1 sleepq_wait+0x3a
_sx_xlock_hard+0x48a _sx_xlock+0x5d spa_all_configs+0x6e
zfs_ioc_pool_configs+0x19 zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114
kern_ioctl+0x255 sys_ioctl+0x13c amd64_syscall+0x351 Xfast_syscall+0xfb
45485 101761 csh - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d kern_sigsuspend+0xf4
sys_sigsuspend+0x31 amd64_syscall+0x351 Xfast_syscall+0xfb
45553 100079 procstat - <running>
51117 101760 ls - mi_switch+0xe1 sleepq_wait+0x3a
_cv_wait+0x16d dbuf_read+0x45b dmu_buf_hold+0x3d zap_lockdir+0x46
zap_cursor_retrieve+0x131 zfs_freebsd_readdir+0x3e1 VOP_READDIR_APV+0xa7
kern_getdirentries+0x21c sys_getdirentries+0x28 amd64_syscall+0x351
Xfast_syscall+0xfb
65149 101762 csh - mi_switch+0xe1
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a tty_wait+0x1c
ttydisc_read+0x2d4 ttydev_read+0x86 devfs_read_f+0xeb dofileread+0x95
kern_readv+0x68 sys_read+0x63 amd64_syscall+0x351 Xfast_syscall+0xfb
65496 101767 ls - mi_switch+0xe1 sleepq_wait+0x3a
_cv_wait+0x16d dbuf_read+0x45b dmu_buf_hold+0x3d zap_lockdir+0x46
zap_cursor_retrieve+0x131 zfs_freebsd_readdir+0x3e1 VOP_READDIR_APV+0xa7
kern_getdirentries+0x21c sys_getdirentries+0x28 amd64_syscall+0x351
Xfast_syscall+0xfb
65624 101755 zfs - mi_switch+0xe1 sleepq_wait+0x3a
_sx_xlock_hard+0x48a _sx_xlock+0x5d spa_all_configs+0x6e
zfs_ioc_pool_configs+0x19 zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114
kern_ioctl+0x255 sys_ioctl+0x13c amd64_syscall+0x351 Xfast_syscall+0xfb
There isn't much running on the machine at this point. I'm waiting for ZFS to
be a bit more stable.
I'll reconfigure the zpool failmode property when I reboot. It may take a while
for the problem to reproduce.
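For reference, failmode is a per-pool property; a sketch of the commands involved (the pool name `tank` is a placeholder, and wait/continue/panic are the documented values):

```shell
# Show the current failmode; the default "wait" blocks I/O (and any
# lock holders) until the underlying device recovers.
zpool get failmode tank

# Switch to "continue" so EIO is returned instead of blocking
# indefinitely; "panic" is the third documented value.
zpool set failmode=continue tank
```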
More information about the freebsd-fs mailing list