6.1 panic after approx. 49 days uptime

Kostik Belousov kostikbel at gmail.com
Mon Jul 17 09:02:56 UTC 2006


On Sun, Jul 16, 2006 at 09:46:49AM +0100, Mark Knight wrote:
> In message <20060716084210.GL32624 at deviant.kiev.zoral.com.ua>, Kostik 
> Belousov <kostikbel at gmail.com> writes
> >On Sun, Jul 16, 2006 at 09:32:47AM +0100, Mark Knight wrote:
> >>Just awoke to find my home server (6.1-RELEASE) had panicked during its
> >>daily update of /usr/ports with an uptime of 49 days.
> >>
> >>Stack trace is here:
> >>
> >>  <http://www.knigma.org.uk/scratch/crash_160706.txt>
> >>
> >>Looks file system related to me.  Any advice appreciated.
> >
> >If you still have the core dump at hand, go to frame #7 and post the
> >output of the commands "p *vp" and "p *(vp->v_mount)".
> 
> Appended to log file (in case of mail formatting) and reproduced here:
> 
> (kgdb) p *(vp)
> $3 = {v_type = VBAD, v_tag = 0xc0791704 "none", v_op = 0xc07d89e0, v_data = 0x0, v_mount = 0x0,
>   v_nmntvnodes = {tqe_next = 0x0, tqe_prev = 0xc3250014}, v_un = {vu_mount = 0x0, vu_socket = 0x0,
>     vu_cdev = 0x0, vu_fifoinfo = 0x0}, v_hashlist = {le_next = 0x0, le_prev = 0xc295f570},
>   v_hash = 3269747, v_cache_src = {lh_first = 0x0}, v_cache_dst = {tqh_first = 0x0, tqh_last = 0xc335cbe0},
>   v_dd = 0x0, v_cstart = 0, v_lasta = 0, v_lastw = 0, v_clen = 0, v_lock = {lk_interlock = 0xc08073dc,
>     lk_flags = 64, lk_sharecount = 0, lk_waitcount = 0, lk_exclusivecount = 0, lk_prio = 80,
>     lk_wmesg = 0xc07a24ed "ufs", lk_timo = 51, lk_lockholder = 0xffffffff, lk_newlock = 0x0},
>   v_interlock = {mtx_object = {lo_class = 0xc07e0644, lo_name = 0xc07a3a55 "vnode interlock",
>       lo_type = 0xc07a3a55 "vnode interlock", lo_flags = 196608, lo_list = {tqe_next = 0x0,
>         tqe_prev = 0x0}, lo_witness = 0x0}, mtx_lock = 4, mtx_recurse = 0}, v_vnlock = 0xc335cc08,
>   v_holdcnt = 1, v_usecount = 0, v_iflag = 128, v_vflag = 0, v_writecount = 0, v_freelist = {
>     tqe_next = 0xc3248990, tqe_prev = 0xc080d22c}, v_bufobj = {bo_mtx = 0xc335cc2c, bo_clean = {bv_hd = {
>         tqh_first = 0x0, tqh_last = 0xc335cc74}, bv_root = 0x0, bv_cnt = 0}, bo_dirty = {bv_hd = {
>         tqh_first = 0x0, tqh_last = 0xc335cc84}, bv_root = 0x0, bv_cnt = 0}, bo_numoutput = 0, bo_flag = 0,
>     bo_ops = 0xc07e6564, bo_bsize = 8192, bo_object = 0x0, bo_synclist = {le_next = 0x0, le_prev = 0x0},
>     bo_private = 0xc335cbb0, __bo_vnode = 0xc335cbb0}, v_pollinfo = 0x0, v_label = 0x0}
> (kgdb) p *(vp->v_mount)
> Cannot access memory at address 0x0
> (kgdb)
> 
> Thanks,
Thank you for the data. As far as I can see, the problem can be worked
around with the following patch:

Index: mount.h
===================================================================
RCS file: /usr/local/arch/ncvs/src/sys/sys/mount.h,v
retrieving revision 1.210
diff -u -r1.210 mount.h
--- mount.h	5 May 2006 19:32:35 -0000	1.210
+++ mount.h	16 Jul 2006 09:15:32 -0000
@@ -578,7 +578,7 @@
 	int _locked;							\
 	struct mount *_MP;						\
 	_MP = (MP);							\
-	if (VFS_NEEDSGIANT(_MP)) {					\
+	if (_MP != NULL && VFS_NEEDSGIANT(_MP)) {			\
 		mtx_lock(&Giant);					\
 		_locked = 1;						\
 	} else								\

What seems quite nontrivial is testing this. Did you unmount some
filesystem before the panic happened?

To reproduce the situation, the following conjunction of events is needed
(a sketch of the resulting failure follows the list):
1. there is free vnode pressure
2. some very active filesystem is unmounted
3. some further file activity is going on
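
To illustrate (purely a sketch of the failure mode, not the exact code path
from your backtrace): once the unmount has gone through and the vnode is
reclaimed to relieve the vnode shortage, vp->v_mount is NULL, and a later
consumer doing the usual

	int vfslocked;

	vfslocked = VFS_LOCK_GIANT(vp->v_mount);  /* v_mount == NULL here */
	/* ... vnode operations ... */
	VFS_UNLOCK_GIANT(vfslocked);

page-faults inside VFS_NEEDSGIANT() without the patch, and simply proceeds
without Giant with it.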