another zfs panic

Maxim Dounin mdounin at mdounin.ru
Thu Jul 30 01:57:49 UTC 2009


Hello!

Here is a zfs panic I'm able to reproduce by running an scp from a 
remote machine to a zfs volume while three parallel untars of the 
ports tree run in a loop.  I'm not sure every part of the workload 
is required, but the combination triggers the panic within several 
hours.
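The local half of the workload can be sketched roughly as below.  This is only an illustration, not the exact script used: the mountpoint, tarball path, iteration knob, and the remote-side scp command are all placeholders.

```shell
#!/bin/sh
# Hypothetical reproduction sketch.  ZFS_MNT, TARBALL and the remote
# host are assumptions, not taken from the original report.
: "${ZFS_MNT:=/tank}"          # assumed zfs mountpoint
: "${TARBALL:=/tmp/ports.tar}" # assumed tarball of the ports tree
: "${ITERATIONS:=0}"           # 0 = loop until the box panics

# Repeatedly extract the tarball into a private directory.
untar_loop() {
    dir="$ZFS_MNT/untar.$1"
    i=0
    while [ "$ITERATIONS" -eq 0 ] || [ "$i" -lt "$ITERATIONS" ]; do
        rm -rf "$dir"
        mkdir -p "$dir"
        tar -xf "$TARBALL" -C "$dir"
        i=$((i + 1))
    done
}

if [ "${RUN_REPRO:-0}" = 1 ]; then
    # Three parallel untar loops on the zfs volume.
    for n in 1 2 3; do
        untar_loop "$n" &
    done
    # In parallel, from the remote machine, keep an scp running into
    # the pool, e.g.:  scp -r /some/tree user@target:/tank/scp-dst
    wait
fi
```

Set RUN_REPRO=1 to actually start the loops; by default the script only defines the helper.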

This is on a fresh CURRENT with the GENERIC kernel:

panic: sx_xlock() of destroyed sx @ 
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_rlock.c:535
cpuid = 6
KDB: enter: panic
[thread pid 36 tid 100071 ]
Stopped at      kdb_enter+0x3d: movq    $0,0x68a040(%rip)
db> bt
Tracing pid 36 tid 100071 td 0xffffff00040f3720
kdb_enter() at kdb_enter+0x3d
panic() at panic+0x17b
_sx_xlock() at _sx_xlock+0xfc
zfs_range_unlock() at zfs_range_unlock+0x38
zfs_get_data() at zfs_get_data+0xc1
zil_commit() at zil_commit+0x532
zfs_sync() at zfs_sync+0xa6
sync_fsync() at sync_fsync+0x13a
sync_vnode() at sync_vnode+0x157
sched_sync() at sched_sync+0x1d1
fork_exit() at fork_exit+0x12a
fork_trampoline() at fork_trampoline+0xe
--- trap 0, rip = 0, rsp = 0xffffff80e7ee3d30, rbp = 0 ---

The machine is otherwise idle.  The only zfs-related tuning applied 
is compression=gzip-9.
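For reference, that tuning corresponds to a single property set on the dataset; the pool/dataset name here is a placeholder:

```
# tank is a placeholder dataset name
zfs set compression=gzip-9 tank
```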

Please let me know if you want me to test some patches.

Maxim Dounin


More information about the freebsd-current mailing list