zfs: Fatal trap 12: page fault while in kernel mode
Thomas Backman
serenity at exscape.org
Wed Jul 29 16:11:23 UTC 2009
On Jul 29, 2009, at 18:02, Andriy Gapon wrote:
> on 29/07/2009 17:36 Andriy Gapon said the following:
>> on 29/07/2009 17:10 Thomas Backman said the following:
>> [snip]
>>> (kgdb) fr 11
>> [snip]
>>> (kgdb) p *sx
>>> $8 = {lock_object = {lo_name = 0xffffffff80b5634c "zp->z_lock",
>>>     lo_flags = 40894464 [0x2700000, btw], lo_data = 0, lo_witness = 0x0},
>>>   sx_lock = 6}
>>>
>>> ... as you might notice, I'm mostly clueless as to what I'm doing here. :o
>>> Hope that helps (a bit), though.
>>
>> Yes, it does, and a lot.
>> sx_lock = 6 means that this sx lock is destroyed:
>> #define SX_LOCK_DESTROYED \
>>     (SX_LOCK_SHARED_WAITERS | SX_LOCK_EXCLUSIVE_WAITERS)
>>
>> And lo_name tells that this is zp->z_lock.
>> This lock is destroyed in zfs_znode_cache_destructor.
>> Not enough knowledge for me to proceed further.
>
> So I guess that this is a case where zfs_znode_delete() was called on a
> znode that was still referenced from some vnode. When the vnode gets
> reclaimed, we get this problem.
> Could you please examine vp in frame 15 or 16?
>
> --
> Andriy Gapon
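(An aside, mostly to convince myself of the "sx_lock = 6" reading: if I
have the sys/sx.h values right -- worth double-checking against this
tree -- the waiter bits are

#define SX_LOCK_SHARED_WAITERS          0x02
#define SX_LOCK_EXCLUSIVE_WAITERS       0x04

so SX_LOCK_DESTROYED = 0x02 | 0x04 = 0x06, exactly the 6 in the dump
above. Taking such a lock again -- say, when vgonel() reclaims a vnode
whose znode already went through zfs_znode_cache_destructor -- would
touch freed memory, which would fit the page fault.)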
Sure.
Lots of info in that one:
(kgdb) fr 15
#15 0xffffffff803c839e in vgonel (vp=0xffffff0009252588) at vnode_if.h:830
830     in vnode_if.h
(kgdb) p *vp
$3 = {v_type = VDIR, v_tag = 0xffffffff80b56347 "zfs", v_op = 0xffffffff80b5af00, v_data = 0xffffff00090d5000,
  v_mount = 0xffffff0002cd7bc0, v_nmntvnodes = {tqe_next = 0xffffff00090f5000, tqe_prev = 0xffffff0009252960},
  v_un = {vu_mount = 0x0, vu_socket = 0x0, vu_cdev = 0x0, vu_fifoinfo = 0x0, vu_yield = 0},
  v_hashlist = {le_next = 0x0, le_prev = 0x0}, v_hash = 0, v_cache_src = {lh_first = 0x0},
  v_cache_dst = {tqh_first = 0x0, tqh_last = 0xffffff00092525e8}, v_cache_dd = 0x0,
  v_cstart = 0, v_lasta = 0, v_lastw = 0, v_clen = 0,
  v_lock = {lock_object = {lo_name = 0xffffffff80b56347 "zfs", lo_flags = 91947008,
      lo_data = 0, lo_witness = 0x0}, lk_lock = 18446742974952890368, lk_timo = 51, lk_pri = 80},
  v_interlock = {lock_object = {lo_name = 0xffffffff806126d9 "vnode interlock",
      lo_flags = 16973824, lo_data = 0, lo_witness = 0x0}, mtx_lock = 4},
  v_vnlock = 0xffffff0009252620, v_holdcnt = 1, v_usecount = 0, v_iflag = 128, v_vflag = 0,
  v_writecount = 0, v_freelist = {tqe_next = 0xffffff00090c3760, tqe_prev = 0xffffff002c0bfc18},
  v_bufobj = {bo_mtx = {lock_object = {lo_name = 0xffffffff806126e9 "bufobj interlock",
        lo_flags = 16973824, lo_data = 0, lo_witness = 0x0}, mtx_lock = 4},
    bo_clean = {bv_hd = {tqh_first = 0x0, tqh_last = 0xffffff00092526c0}, bv_root = 0x0, bv_cnt = 0},
    bo_dirty = {bv_hd = {tqh_first = 0x0, tqh_last = 0xffffff00092526e0}, bv_root = 0x0, bv_cnt = 0},
    bo_numoutput = 0, bo_flag = 0, bo_ops = 0xffffffff8079afa0, bo_bsize = 131072,
    bo_object = 0x0, bo_synclist = {le_next = 0x0, le_prev = 0x0},
    bo_private = 0xffffff0009252588, __bo_vnode = 0xffffff0009252588},
  v_pollinfo = 0x0, v_label = 0x0, v_lockf = 0x0}
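A couple of things stand out to me there: v_usecount = 0 but
v_holdcnt = 1, and v_iflag = 128, which I believe is VI_DOOMED (0x0080),
i.e. the vnode is mid-reclaim while v_data still points at a znode. If
it helps, I could dump the znode itself with something like (assuming
kgdb can resolve znode_t from the zfs module's symbols):

(kgdb) p *(znode_t *)vp->v_data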
Regards,
Thomas