kern/92785: Using exported filesystem on OS/2 NFS client causes filesystem freeze

Ulrich Spoerlein uspoerlein at
Fri Dec 15 07:20:54 PST 2006

The following reply was made to PR kern/92785; it has been noted by GNATS.

From: "Ulrich Spoerlein" <uspoerlein at>
To: "Kostik Belousov" <kostikbel at>
Cc: stable at, bug-followup at
Subject: Re: kern/92785: Using exported filesystem on OS/2 NFS client causes filesystem freeze
Date: Fri, 15 Dec 2006 16:18:32 +0100

 On 12/15/06, Kostik Belousov <kostikbel at> wrote:
 > Am I right that all you did was ls -l <root of nfs mount> ? Does OS/2
 > supports the notion of ".." directory ? Could you do just "ls -l .."
 > from nfs client and then try "stat <root of exported fs>" on the server
 > (i think it shall hang) ?
 Yes, you are right about the symptoms. We tried the following on the OS/2 client:
   mount export
   umount export
   mount export
   umount export
 This all works fine. Then we do a "dir" on the mounted FS:
   mount i: /export/foo
   dir i:
   umount  <-- hangs, as mountd can't process the RPC.
 > My hypothesis is that LOOKUP RPC for ".." causes directory vnode lock
 > leak in nfs_namei. After that, mountd hang is just consequence.
 So, I mounted from the OS/2 client, ran a dir on the i: drive and then
 a stat(1) on the exported partition on the server. The stat would
 hang; here are the backtraces:
 db> ps
   pid  ppid  pgrp   uid   state   wmesg     wchan    cmd
 33017 88035 33017     0  S+      ufs      0xc8771880 stat
 23627 55476 23627     0  S+      bpf      0xc8e16c00 tcpdump
 88035 87505 88035     0  S+      pause    0xc882bcc4 tcsh
 87505 72558 87505  1000  S+      wait     0xc86f9218 su
 72558 89630 72558  1000  Ss+     pause    0xc873867c tcsh
 21229     1 21229     0  Ss      select   0xc09c10c4 mountd
 91293 79042 79042     0  S       -        0xc8668200 nfsd
 88479 79042 79042     0  S       -        0xc8668600 nfsd
 86952 79042 79042     0  S       -        0xc847cc00 nfsd
 83659 79042 79042     0  S       -        0xc8678200 nfsd
 79042     1 79042     0  Ss      accept   0xc8d649f6 nfsd
 55476 52005 55476     0  S+      pause    0xc8bcc24c tcsh
 52005 95193 52005  1000  S+      wait     0xc8734648 su
 db> show lockedvnods
 Locked vnodes
 0xc8771828: tag ufs, type VDIR
     usecount 0, writecount 0, refcount 4 mountedhere 0
     flags (VV_ROOT)
     v_object 0xc8a8a084 ref 0 pages 1
      lock type ufs: EXCL (count 1) by thread 0xc882f900 (pid 83659)
 with 1 pending
 #0 0xc0668bf9 at lockmgr+0x4ed
 #1 0xc078572e at ffs_lock+0x76
 #2 0xc0838287 at VOP_LOCK_APV+0x87
 #3 0xc06d663c at vn_lock+0xac
 #4 0xc06ca4ca at vget+0xc2
 #5 0xc06c24a9 at vfs_hash_get+0x8d
 #6 0xc07844af at ffs_vget+0x27
 #7 0xc078b253 at ufs_lookup+0xa4b
 #8 0xc083641b at VOP_CACHEDLOOKUP_APV+0x9b
 #9 0xc06bf499 at vfs_cache_lookup+0xb5
 #10 0xc0836347 at VOP_LOOKUP_APV+0x87
 #11 0xc06c3626 at lookup+0x46e
 #12 0xc0734fba at nfs_namei+0x40e
 #13 0xc0726d81 at nfsrv_lookup+0x1dd
 #14 0xc0736765 at nfssvc_nfsd+0x3d9
 #15 0xc07360b4 at nfssvc+0x18c
 #16 0xc0825a07 at syscall+0x25b
 #17 0xc0811f7f at Xint0x80_syscall+0x1f
         ino 2, on dev da1s2e
 db> tr 33017
 Tracing pid 33017 tid 100125 td 0xc86fd600
 sched_switch(c86fd600,0,1) at sched_switch+0x177
 mi_switch(1,0) at mi_switch+0x270
 sleepq_switch(c8771880,c0973440,0,c089798c,211,...) at sleepq_switch+0xc1
 sleepq_wait(c8771880,0,c87718f0,b7,c08929b8,...) at sleepq_wait+0x46
 msleep(c8771880,c0972590,50,c089c1c1,0,...) at msleep+0x279
 acquire(eb01694c,40,60000,c86fd600,0,...) at acquire+0x76
 lockmgr(c8771880,2002,c87718f0,c86fd600) at lockmgr+0x44e
 ffs_lock(eb0169a4) at ffs_lock+0x76
 VOP_LOCK_APV(c0943320,eb0169a4) at VOP_LOCK_APV+0x87
 vn_lock(c8771828,2002,c86fd600,c8771828) at vn_lock+0xac
 vget(c8771828,2002,c86fd600) at vget+0xc2
 vfs_hash_get(c87115c8,2,2,c86fd600,eb016abc,0,0) at vfs_hash_get+0x8d
 ffs_vget(c87115c8,2,2,eb016abc) at ffs_vget+0x27
 ufs_root(c87115c8,2,eb016b00,c86fd600,0,...) at ufs_root+0x19
 lookup(eb016ba0) at lookup+0x743
 namei(eb016ba0) at namei+0x39a
 kern_lstat(c86fd600,bfbfed99,0,eb016c74) at kern_lstat+0x47
 lstat(c86fd600,eb016d04) at lstat+0x1b
 syscall(3b,3b,3b,0,bfbfebf0,...) at syscall+0x25b
 Xint0x80_syscall() at Xint0x80_syscall+0x1f
 --- syscall (190, FreeBSD ELF32, lstat), eip = 0x2812d427, esp =
 0xbfbfeb9c, ebp = 0xbfbfec68 ---
 db> tr 83659
 Tracing pid 83659 tid 100115 td 0xc882f900
 sched_switch(c882f900,0,1) at sched_switch+0x177
 mi_switch(1,0) at mi_switch+0x270
 sleepq_switch(c8678200) at sleepq_switch+0xc1
 sleepq_wait_sig(c8678200) at sleepq_wait_sig+0x1d
 msleep(c8678200,c09c9f00,158,c088bec9,0,...) at msleep+0x26a
 nfssvc_nfsd(c882f900) at nfssvc_nfsd+0xe5
 nfssvc(c882f900,eaf8ad04) at nfssvc+0x18c
 syscall(3b,3b,3b,1,0,...) at syscall+0x25b
 Xint0x80_syscall() at Xint0x80_syscall+0x1f
 --- syscall (155, FreeBSD ELF32, nfssvc), eip = 0x280bd1b7, esp =
 0xbfbfe90c, ebp = 0xbfbfe928 ---
 Do you think you can fix it? Any idea why this seems to happen only
 with OS/2 clients?

More information about the freebsd-bugs mailing list