shutdown -p now crashes

Jeremy Chadwick koitsu at FreeBSD.org
Wed Nov 19 23:34:05 PST 2008


On Thu, Nov 20, 2008 at 03:18:10PM +0800, Ganbold wrote:
> Jeremy Chadwick wrote:
>> On Thu, Nov 20, 2008 at 02:50:46PM +0800, Ganbold wrote:
>>   
>>> Ganbold wrote:
>>>     
>>>> #3  0xc06ab8b3 in vput (vp=0xc3d11ac8) at   
>>>> /usr/src/sys/kern/vfs_subr.c:2202
>>>> #4  0xc06a69e6 in dounmount (mp=0xc3e59b30, flags=524288,   
>>>> td=0xc3b31d20) at /usr/src/sys/kern/vfs_mount.c:1288
>>>> #5  0xc06aa493 in vfs_unmountall () at /usr/src/sys/kern/vfs_subr.c:2939
>>>> #6  0xc062b7f4 in boot (howto=16392) at   
>>>> /usr/src/sys/kern/kern_shutdown.c:400
>>>> #7  0xc062bc17 in reboot (td=0xc3b31d20, uap=0xeef55cfc) at   
>>>> /usr/src/sys/kern/kern_shutdown.c:172
>>>> #8  0xc08171f5 in syscall (frame=0xeef55d38) at   
>>>> /usr/src/sys/i386/i386/trap.c:1090
>>>> #9  0xc07fd710 in Xint0x80_syscall () at   
>>>> /usr/src/sys/i386/i386/exception.s:255
>>>> #10 0x00000033 in ?? ()
>>>> (kgdb)
>>>> ..
>>>>       
>>> I was able to reproduce the panic. I plugged in an external ZFS/GELI
>>> HDD via USB, used 'zpool import' to import the pool, did some 'ls',
>>> and then used 'zpool export' to unmount it.
>>> Then I detached the disk from the desktop and ran "shutdown -p now".
>>> I have reproduced the panic this way twice.
>>>     
>>
>> Are we sure that the problem isn't the well-known "do not yank a device
>> out from underneath the rest of the OS" problem, i.e. pulling a
>> removable device while filesystems on it are still mounted?  If so, the
>> issue is well documented on my Wiki, and work is being done in CURRENT
>> to fix it (official ETA is 2009/02).
>>
>> I see geli(8) has a "detach" option, which you might need to run after
>> the "zpool export", since the GEOM provider is still attached to the
>> USB HDD.  I would recommend trying that first.
>>   
>
> Thanks for the suggestion :) In my zfs_mount and zfs_umount scripts I have:
>
> zfs_mount.sh
> #!/bin/sh
> geli attach -k /root/da0.key /dev/da0s1e
> zpool import tank1
> zpool import tank2
>
> zfs_umount.sh
> #!/bin/sh
> zpool export tank1
> zpool export tank2
> geli detach da0s1e.eli
>
> I hope that answers what you meant :)


Thanks -- it does.  I was worried the detach was missing and that this
might explain the anomaly; since it's there, that isn't it.  I hope
others can help...
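
For whoever picks this up: the "vput: negative ref cnt" panic in the
traces below is the kernel asserting that a vnode's use count never
drops below zero, and the kgdb session shows fsrootvp with
v_usecount = 0 by the time dounmount() reaches vput(fsrootvp), i.e.
the root vnode looks like it has been released once too often on the
way down.  Here is a minimal user-space sketch of that invariant --
my own illustration, not the actual sys/kern/vfs_subr.c code, and the
fake_vnode/fake_vref/fake_vput names are invented:

/*
 * Toy model of the vnode use-count invariant behind
 * "vput: negative ref cnt".  One release too many trips the check.
 */
#include <stdio.h>
#include <stdlib.h>

struct fake_vnode {
	int v_usecount;		/* active references, cf. struct vnode */
};

static void
fake_vref(struct fake_vnode *vp)
{
	vp->v_usecount++;
}

static void
fake_vput(struct fake_vnode *vp)
{
	if (vp->v_usecount <= 0) {
		/* The kernel panics at this point; we just report it. */
		fprintf(stderr, "panic: vput: negative ref cnt\n");
		exit(1);
	}
	vp->v_usecount--;
}

int
main(void)
{
	struct fake_vnode root = { .v_usecount = 0 };

	fake_vref(&root);	/* reference taken, e.g. at mount time */
	fake_vput(&root);	/* balanced release */
	fake_vput(&root);	/* one release too many -> the "panic" */
	return (0);
}

The custom "rootvp ... v_usecount" printfs in the output below appear
to be tracking exactly that: rootvp's use count being dropped by
vrele() from fdfree() and then again from dounmount() before the
final vput() trips the check.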


>>> More info:
>>>
>>> daemon% uname -an
>>> FreeBSD daemon.micom.mng.net 7.1-PRERELEASE FreeBSD 7.1-PRERELEASE #8 
>>>  r185085M: Thu Nov 20 12:50:33 ULAT 2008       
>>> tsgan at daemon.micom.mng.net:/usr/obj/usr/src/sys/GDAEMON  i386
>>> daemon%
>>>
>>>
>>> <118>.
>>> <118>Writing entropy file:
>>> <118>.
>>> <118>Terminated
>>> <118>.
>>> <118>Nov 20 14:30:12 daemon syslogd: exiting on signal 15
>>> rootvp 0xc3d11ac8 v_usecount 2
>>> KDB: stack backtrace:
>>> db_trace_self_wrapper(c086b1ee,f2c6bbac,c06abb23,c086faa8,c3d11ac8,...) 
>>>  at db_trace_self_wrapper+0x26
>>> kdb_backtrace(c086faa8,c3d11ac8,2,b,0,...) at kdb_backtrace+0x29
>>> vrele(c3d11ac8,c3e6f690,0,0,0,...) at vrele+0x83
>>> fdfree(c3e6f690,c3e8f000,48077bf0,f2c6bc80,c062ccfe,...) at fdfree+0x8b6
>>> exit1(c3e6f690,0,f2c6bd2c,c0817245,c3e6f690,...) at exit1+0x5e9
>>> sys_exit(c3e6f690,f2c6bcfc,4,c3e6f690,f2c6bd2c,...) at sys_exit+0x1d
>>> syscall(f2c6bd38) at syscall+0x335
>>> Xint0x80_syscall() at Xint0x80_syscall+0x20
>>> --- syscall (1, FreeBSD ELF32, sys_exit), eip = 0x4810ed2f, esp =   
>>> 0xbfbfe6bc, ebp = 0xbfbfe6c8 ---
>>> rootvp 0xc3d11ac8 v_usecount 1
>>> KDB: stack backtrace:
>>> db_trace_self_wrapper(c086b1ee,f2c6bbac,c06abb23,c086faa8,c3d11ac8,...) 
>>>  at db_trace_self_wrapper+0x26
>>> kdb_backtrace(c086faa8,c3d11ac8,1,b,0,...) at kdb_backtrace+0x29
>>> vrele(c3d11ac8,c3e6f690,0,0,0,...) at vrele+0x83
>>> fdfree(c3e6f690,c3e8f000,48077bf0,f2c6bc80,c062ccfe,...) at fdfree+0x8c3
>>> exit1(c3e6f690,0,f2c6bd2c,c0817245,c3e6f690,...) at exit1+0x5e9
>>> sys_exit(c3e6f690,f2c6bcfc,4,c3e6f690,f2c6bd2c,...) at sys_exit+0x1d
>>> syscall(f2c6bd38) at syscall+0x335
>>> Xint0x80_syscall() at Xint0x80_syscall+0x20
>>> --- syscall (1, FreeBSD ELF32, sys_exit), eip = 0x4810ed2f, esp =   
>>> 0xbfbfe6bc, ebp = 0xbfbfe6c8 ---
>>> Waiting (max 60 seconds) for system process `vnlru' to stop...done
>>> Waiting (max 60 seconds) for system process `bufdaemon' to stop...done
>>> Waiting (max 60 seconds) for system process `syncer' to stop...
>>> Syncing disks, vnodes remaining...2 1 0 0 done
>>> All buffers synced.
>>> rootvp 0xc3d11ac8 v_usecount 1
>>> KDB: stack backtrace:
>>> db_trace_self_wrapper(c086b1ee,eef55ba8,c06abb23,c086faa8,c3d11ac8,...) 
>>>  at db_trace_self_wrapper+0x26
>>> kdb_backtrace(c086faa8,c3d11ac8,1,c3b31d20,eef55ba8,...) at   
>>> kdb_backtrace+0x29
>>> vrele(c3d11ac8,2,eef55bf0,c3b31d20,0,...) at vrele+0x83
>>> dounmount(c3e56b30,80000,c3b31d20,e3b61aec,0,...) at dounmount+0x491
>>> vfs_unmountall(c0869007,0,c086906b,117,eef55c50,...) at vfs_unmountall+0x33
>>> boot(c3b31d20,8,0,fffffffe,c3b31d20,...) at boot+0x444
>>> reboot(c3b31d20,eef55cfc,4,8048290,9e7c9054,...) at reboot+0x67
>>> syscall(eef55d38) at syscall+0x335
>>> Xint0x80_syscall() at Xint0x80_syscall+0x20
>>> --- syscall (55, FreeBSD ELF32, reboot), eip = 0x8050bb3, esp =   
>>> 0xbfbfe8ec, ebp = 0xbfbfe9b8 ---
>>> panic: vput: negative ref cnt
>>> cpuid = 0
>>> KDB: stack backtrace:
>>> db_trace_self_wrapper(c086b1ee,eef55b68,c062bd39,c08832ea,0,...) at   
>>> db_trace_self_wrapper+0x26
>>> kdb_backtrace(c08832ea,0,c086fac1,eef55b74,0,...) at kdb_backtrace+0x29
>>> panic(c086fac1,eef55ba8,c06abcd0,c3d11ac8,eef55b90,...) at panic+0x119
>>> vput(c3d11ac8,2,eef55bf0,c3b31d20,0,...) at vput+0xf3
>>> dounmount(c3e56b30,80000,c3b31d20,e3b61aec,0,...) at dounmount+0x4a6
>>> vfs_unmountall(c0869007,0,c086906b,117,eef55c50,...) at vfs_unmountall+0x33
>>> boot(c3b31d20,8,0,fffffffe,c3b31d20,...) at boot+0x444
>>> reboot(c3b31d20,eef55cfc,4,8048290,9e7c9054,...) at reboot+0x67
>>> syscall(eef55d38) at syscall+0x335
>>> Xint0x80_syscall() at Xint0x80_syscall+0x20
>>> --- syscall (55, FreeBSD ELF32, reboot), eip = 0x8050bb3, esp =   
>>> 0xbfbfe8ec, ebp = 0xbfbfe9b8 ---
>>> Uptime: 7m3s
>>> Physical memory: 1013 MB
>>> Dumping 54 MB: 39 23 7
>>> (kgdb) where
>>> #0  doadump () at pcpu.h:196
>>> #1  0xc062ba67 in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:418
>>> #2  0xc062bd72 in panic (fmt=Variable "fmt" is not available.
>>> ) at /usr/src/sys/kern/kern_shutdown.c:574
>>> #3  0xc06ab8e3 in vput (vp=0xc3d11ac8) at /usr/src/sys/kern/vfs_subr.c:2211
>>> #4  0xc06a69e6 in dounmount (mp=0xc3e56b30, flags=524288, 
>>> td=0xc3b31d20)  at /usr/src/sys/kern/vfs_mount.c:1288
>>> #5  0xc06aa493 in vfs_unmountall () at /usr/src/sys/kern/vfs_subr.c:2948
>>> #6  0xc062b7f4 in boot (howto=16392) at   
>>> /usr/src/sys/kern/kern_shutdown.c:400
>>> #7  0xc062bc17 in reboot (td=0xc3b31d20, uap=0xeef55cfc) at   
>>> /usr/src/sys/kern/kern_shutdown.c:172
>>> #8  0xc0817245 in syscall (frame=0xeef55d38) at   
>>> /usr/src/sys/i386/i386/trap.c:1090
>>> #9  0xc07fd760 in Xint0x80_syscall () at   
>>> /usr/src/sys/i386/i386/exception.s:255
>>> #10 0x00000033 in ?? ()
>>> (kgdb) frame 4
>>> #4  0xc06a69e6 in dounmount (mp=0xc3e56b30, flags=524288, 
>>> td=0xc3b31d20)  at /usr/src/sys/kern/vfs_mount.c:1288
>>> 1288            vput(fsrootvp);
>>> (kgdb) p *mp
>>> $1 = {mnt_lock = {lk_object = {lo_name = 0xc086f234 "vfslock", 
>>> lo_type =  0xc086f234 "vfslock", lo_flags = 70844416, lo_witness_data 
>>> = {lod_list = {
>>>          stqe_next = 0x0}, lod_witness = 0x0}}, lk_interlock =   
>>> 0xc0956fa8, lk_flags = 1310720, lk_sharecount = 0, lk_waitcount = 0,  
>>> lk_exclusivecount = 1,
>>>    lk_prio = 80, lk_timo = 0, lk_lockholder = 0xc3b31d20, lk_newlock 
>>> =  0x0}, mnt_mtx = {lock_object = {lo_name = 0xc086f223 "struct mount 
>>> mtx",
>>>      lo_type = 0xc086f223 "struct mount mtx", lo_flags = 16973824,   
>>> lo_witness_data = {lod_list = {stqe_next = 0x0}, lod_witness = 0x0}}, 
>>>  mtx_lock = 4,
>>>    mtx_recurse = 0}, mnt_gen = 1, mnt_list = {tqe_next = 0x0, 
>>> tqe_prev = 0xc091c224}, mnt_op = 0xc0926080, mnt_vfc = 0xc0926040, 
>>> mnt_vnodecovered = 0x0,
>>>  mnt_syncer = 0x0, mnt_ref = 736, mnt_nvnodelist = {tqh_first =   
>>> 0xc3d11ac8, tqh_last = 0xc67fd350}, mnt_nvnodelistsize = 736,   
>>> mnt_writeopcount = 1,
>>>  mnt_kern_flag = 553648137, mnt_flag = 20480, mnt_noasync = 0, 
>>> mnt_opt = 0xc3c4cc90, mnt_optnew = 0x0, mnt_maxsymlinklen = 120, 
>>> mnt_stat = {
>>>    f_version = 537068824, f_type = 1, f_flags = 20480, f_bsize = 
>>> 2048,  f_iosize = 16384, f_blocks = 2538519, f_bfree = 2339932, 
>>> f_bavail =  2136851,
>>>    f_files = 659454, f_ffree = 651413, f_syncwrites = 0, 
>>> f_asyncwrites = 0, f_syncreads = 0, f_asyncreads = 0, f_spare = {0, 
>>> 0, 0, 0, 0, 0, 0, 0, 0, 0},
>>>    f_namemax = 255, f_owner = 0, f_fsid = {val = {1149777033,   
>>> 1944990175}}, f_charspare = '\0' <repeats 79 times>,
>>>    f_fstypename = "ufs", '\0' <repeats 12 times>, f_mntfromname =   
>>> "/dev/ad4s1a", '\0' <repeats 76 times>, f_mntonname = "/", '\0' 
>>> <repeats  86 times>},
>>>  mnt_cred = 0xc3e52700, mnt_data = 0xc3e52600, mnt_time = 1227162073, 
>>>  mnt_iosize_max = 131072, mnt_export = 0x0, mnt_label = 0x0,
>>>  mnt_hashseed = 2358371174, mnt_markercnt = 0, mnt_holdcnt = 0,   
>>> mnt_holdcntwaiters = 0, mnt_secondary_writes = 0,   
>>> mnt_secondary_accwrites = 2874,
>>>  mnt_gjprovider = 0x0, mnt_explock = {lk_object = {lo_name = 
>>> 0xc086f23c  "explock", lo_type = 0xc086f23c "explock", lo_flags = 
>>> 70844416,  lo_witness_data = {
>>>        lod_list = {stqe_next = 0x0}, lod_witness = 0x0}}, 
>>> lk_interlock = 0xc0956fc0, lk_flags = 0, lk_sharecount = 0, 
>>> lk_waitcount = 0,
>>>    lk_exclusivecount = 0, lk_prio = 80, lk_timo = 0, lk_lockholder =  
>>> 0xffffffff, lk_newlock = 0x0}}
>>> (kgdb) p fsrootvp
>>> $2 = (struct vnode *) 0xc3d11ac8
>>> (kgdb) p *fsrootvp
>>> $3 = {v_type = VDIR, v_tag = 0xc0864e51 "ufs", v_op = 0xc0926280, 
>>> v_data  = 0xc3e5d000, v_mount = 0xc3e56b30, v_nmntvnodes = {tqe_next 
>>> = 0xc3d119b4,
>>>    tqe_prev = 0xc3e56b98}, v_un = {vu_mount = 0x0, vu_socket = 0x0,   
>>> vu_cdev = 0x0, vu_fifoinfo = 0x0, vu_yield = 0}, v_hashlist = 
>>> {le_next =  0x0,
>>>    le_prev = 0xc3d09da0}, v_hash = 2, v_cache_src = {lh_first = 0x0}, 
>>>  v_cache_dst = {tqh_first = 0x0, tqh_last = 0xc3d11af8}, v_dd = 0x0,  
>>> v_cstart = 0,
>>>  v_lasta = 0, v_lastw = 0, v_clen = 0, v_lock = {lk_object = {lo_name 
>>> =  0xc0864e51 "ufs", lo_type = 0xc0864e51 "ufs", lo_flags = 70844416,
>>>      lo_witness_data = {lod_list = {stqe_next = 0x0}, lod_witness =   
>>> 0x0}}, lk_interlock = 0xc0956510, lk_flags = 262208, lk_sharecount = 
>>> 0,
>>>    lk_waitcount = 0, lk_exclusivecount = 1, lk_prio = 80, lk_timo = 
>>> 51,  lk_lockholder = 0xc3b31d20, lk_newlock = 0x0}, v_interlock =   
>>> {lock_object = {
>>>      lo_name = 0xc086fb51 "vnode interlock", lo_type = 0xc086fb51 
>>> "vnode interlock", lo_flags = 16973824, lo_witness_data = {lod_list = 
>>>  {stqe_next = 0x0},
>>>        lod_witness = 0x0}}, mtx_lock = 3283295520, mtx_recurse = 0},  
>>> v_vnlock = 0xc3d11b20, v_holdcnt = 2, v_usecount = 0, v_iflag = 0,   
>>> v_vflag = 1,
>>>  v_writecount = 0, v_freelist = {tqe_next = 0x0, tqe_prev = 0x0},   
>>> v_bufobj = {bo_mtx = 0xc3d11b50, bo_clean = {bv_hd = {tqh_first =   
>>> 0xe3d02594,
>>>        tqh_last = 0xe3d025cc}, bv_root = 0xe3d02594, bv_cnt = 1},   
>>> bo_dirty = {bv_hd = {tqh_first = 0x0, tqh_last = 0xc3d11b9c}, bv_root 
>>> =  0x0,
>>>      bv_cnt = 0}, bo_numoutput = 0, bo_flag = 0, bo_ops = 0xc091ae00, 
>>>  bo_bsize = 16384, bo_object = 0xc106183c, bo_synclist = {le_next = 
>>> 0x0,
>>>      le_prev = 0x0}, bo_private = 0xc3d11ac8, __bo_vnode = 
>>> 0xc3d11ac8},  v_pollinfo = 0x0, v_label = 0x0, v_lockf = 0x0}
>>> (kgdb) p rootvnode
>>> $4 = (struct vnode *) 0x0
>>> (kgdb) p *rootvnode
>>> Cannot access memory at address 0x0
>>> (kgdb)
>>>
>>>
>>> -- 
>>> They are called computers simply because computation is the only   
>>> significant job that has so far been given to them.
>>>     
>>
>>   
>
>
> -- 
> The light at the end of the tunnel may be an oncoming dragon.

-- 
| Jeremy Chadwick                                jdc at parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |


