leak of the vnodes

Kostik Belousov kostikbel at gmail.com
Wed Apr 7 09:59:34 UTC 2010


On Wed, Apr 07, 2010 at 09:00:44AM +0200, Petr Salinger wrote:
> 
> 
> On Wed, 7 Apr 2010, Kostik Belousov wrote:
> 
> >On Tue, Apr 06, 2010 at 10:01:56PM +0200, Petr Salinger wrote:
> >>>Can you try to get a backtrace at the points you have shown me ?
> >>
> >>All are similar to this, with ptyp5/ptyp6/ptyp7 name changes.
> >>
> >>a vnode 0xffffff0058978000: tag devfs, type VCHR
> >>    usecount 1, writecount 1, refcount 2 mountedhere 0xffffff0039cb0c00
> >>    flags (VI_DOOMED)
> >>    lock type devfs: EXCL by thread 0xffffff0039e16760 (pid 31427)
> >>        dev ptyp5
> >>KDB: stack backtrace:
> >>db_trace_self_wrapper() at db_trace_self_wrapper+0x2a
> >>vgonel() at vgonel+0x424
> >>vgone() at vgone+0x39
> >>devfs_delete() at devfs_delete+0x1b1
> >>devfs_populate_loop() at devfs_populate_loop+0x228
> >>devfs_populate() at devfs_populate+0x42
> >>devfs_lookup() at devfs_lookup+0x258
> >>VOP_LOOKUP_APV() at VOP_LOOKUP_APV+0x7e
> >>lookup() at lookup+0x467
> >>namei() at namei+0x3ea
> >>vn_open_cred() at vn_open_cred+0x211
> >>kern_openat() at kern_openat+0x188
> >>syscall() at syscall+0x168
> >>Xfast_syscall() at Xfast_syscall+0xdc
> >>--- syscall (5, FreeBSD ELF64, open), rip = 0x800622097, rsp =
> >>0x7fffffffbb88, rbp = 0x7fffffffbe30 ---
> >>KDB: stack backtrace:
> >>db_trace_self_wrapper() at db_trace_self_wrapper+0x2a
> >>vgonel() at vgonel+0x39d
> >>vgone() at vgone+0x39
> >>devfs_delete() at devfs_delete+0x1b1
> >>devfs_populate_loop() at devfs_populate_loop+0x228
> >>devfs_populate() at devfs_populate+0x42
> >>devfs_lookup() at devfs_lookup+0x258
> >>VOP_LOOKUP_APV() at VOP_LOOKUP_APV+0x7e
> >>lookup() at lookup+0x467
> >>namei() at namei+0x3ea
> >>vn_open_cred() at vn_open_cred+0x211
> >>kern_openat() at kern_openat+0x188
> >>syscall() at syscall+0x168
> >>Xfast_syscall() at Xfast_syscall+0xdc
> >>--- syscall (5, FreeBSD ELF64, open), rip = 0x800622097, rsp =
> >>0x7fffffffbb88, rbp = 0x7fffffffbe30 ---
> >>a vnode 0xffffff00589b5b40: tag devfs, type VCHR
> >>    usecount 1, writecount 1, refcount 2 mountedhere 0xffffff0028d75600
> >>    flags (VI_DOOMED)
> >>    lock type devfs: EXCL by thread 0xffffff0028cfb3b0 (pid 4529)
> >>        dev ptyp6
> >>KDB: stack backtrace:
> >>db_trace_self_wrapper() at db_trace_self_wrapper+0x2a
> >>vgonel() at vgonel+0x424
> >>vgone() at vgone+0x39
> >>devfs_delete() at devfs_delete+0x1b1
> >>devfs_populate_loop() at devfs_populate_loop+0x228
> >>devfs_populate() at devfs_populate+0x42
> >>devfs_lookup() at devfs_lookup+0x258
> >>VOP_LOOKUP_APV() at VOP_LOOKUP_APV+0x7e
> >>lookup() at lookup+0x467
> >>namei() at namei+0x3ea
> >>vn_open_cred() at vn_open_cred+0x211
> >>kern_openat() at kern_openat+0x188
> >>syscall() at syscall+0x168
> >>Xfast_syscall() at Xfast_syscall+0xdc
> >
> >Why do you think that this is the problem ?
> 
> I used the attached diff, with hackish snooping on allocated/freed
> memory for vnodes. When the vp pointer has been logged as
> active1/active2, it is (much) later shown with dead_vnodeops in
> DUMP_VP().
Are there a lot of such /dev/ttyp* vnodes ? This indeed might be
suspicious. See below for a description of how to check whether the
vnodes are leaked or not.

BTW, the patch is racy; two things may cause the information to be
corrupted:
1. Addition to the lbuf[] array is not protected; two threads
might select the same array element for storing the vnode pointer.

2. Removal from lbuf[] is racy too, since the pointer becomes
invalid immediately after uma_zfree() and can be reused for some
other object, in particular another vnode. Then the removal from the
lbuf[] array might remove an active element.

> 
> >One refcount unit is coming from devfs_delete() calling vhold() around
> >vgone() to prevent the vnode from disappearing under it.
> >
> >Second refcount unit comes in pair with use count unit. Use count
> >implies refcount, and use count is allocated when vnode is e.g. opened,
> >to account for struct file having a reference to struct vnode.
> >
> >What *might* happen for the device nodes you have shown, is that some
> >application has file opened for the node /dev/ttyp*, and then master pty
> >device closed. The slave /dev/ttyp* node is destroyed, that you see as
> >devfs_populate->devfs_delete() sequence. The vnode will be eventually
> >freed when corresponding file is closed.
> 
> >If you can confirm that some process has file opened with the reclaimed
> >vnode, then my theory will be confirmed.
> 
> What has to be logged ?
Please look at the ddb command "show files", implemented in
kern/kern_descrip.c, lines 3284-3305 on HEAD. Instead of doing a full
dump, you can manually inspect the output. Or, you can write some code
that searches for the suspicious vnodes among the vnodes referenced from
the processes' open files. A vnode is probably leaked if its use count
is > 0 but no process has the vnode referenced by a struct file.

> 
> >I think there should be something else going on.
> 
> Maybe both processes share file and memory space (RFMEM).

Which "both processes" are you referring to ?

Yes, I noted that you use BSD-ish /dev/ttyp*-style pseudoterminals,
and I know that the glibc/kFreeBSD port uses LinuxThreads for threading.