nullfs and named pipes.

Robert Watson rwatson at
Mon Feb 19 14:01:19 UTC 2007

On Sun, 18 Feb 2007, Josef Karthauser wrote:

> On Fri, Feb 16, 2007 at 04:36:56PM +0200, Kostik Belousov wrote:
>>>>>    cvs diff: Diffing .
>>>>>    Index: null_subr.c
>>>>>    ===================================================================
>>>>>    RCS file: /home/ncvs/src/sys/fs/nullfs/null_subr.c,v
>>>>>    retrieving revision 1.48.2.1
>>>>>    diff -u -r1.48.2.1 null_subr.c
>>>>>    --- null_subr.c 13 Mar 2006 03:05:17 -0000
>>>>>    +++ null_subr.c 14 Feb 2007 00:02:28 -0000
>>>>>    @@ -235,6 +235,8 @@
>>>>> 	    xp->null_vnode = vp;
>>>>> 	    xp->null_lowervp = lowervp;
>>>>> 	    vp->v_type = lowervp->v_type;
>>>>>    +       if (vp->v_type == VSOCK || vp->v_type == VFIFO)
>>>>>    +               vp->v_un = lowervp->v_un;
>>>> I'm wondering whether some reference counting is needed there?
>>> Yes, I find this a bit worrying also, but I don't know enough about how 
>>> nullfs works to reason about it.  What happens when a vnode in the bottom 
>>> layer has its on-disk reference count drop to zero -- is the vnode in the 
>>> top layer invalidated somehow?
>> Vnode reclamation in the lower layer cannot do anything to the 
>> corresponding nullfs vnode; besides, the lower vnode holds a reference 
>> from the nullfs vnode. On the other hand, can a forced unmount proceed 
>> for the lower layer?
> Does anyone know of any reason why I can't commit this as it is, at least 
> for now?  It doesn't appear that it would break anything that currently 
> works, and in its present form it at least fixes named-pipe functionality 
> for the kinds of cases in which people would want to use it.

Well, the worry would be that you would be replacing a clean error on failure 
with an occasional panic, the normal symptom of a race condition.

I think I'm alright with the VFIFO case above, but I'm quite uncomfortable 
with the VSOCK case.  In particular, I suspect that if the socket is closed, 
v_un will be reset in the lower layer, but continue to be a stale pointer in 
the upper layer, leading to accessing free'd or re-allocated kernel memory 
resulting in much badness.  I haven't tested this, but you might give it a 
try and see what happens.

Robert N M Watson
Computer Laboratory
University of Cambridge
