svn commit: r234482 - in head/sys: fs/msdosfs fs/nfsserver kern
Attilio Rao
attilio at freebsd.org
Tue May 22 19:55:50 UTC 2012
2012/4/22 Pawel Jakub Dawidek <pjd at freebsd.org>:
> On Fri, Apr 20, 2012 at 06:50:44AM +0000, Kirk McKusick wrote:
>> Author: mckusick
>> Date: Fri Apr 20 06:50:44 2012
>> New Revision: 234482
>> URL: http://svn.freebsd.org/changeset/base/234482
>>
>> Log:
>> This change creates a new list of active vnodes associated with
>> a mount point. Active vnodes are those with a non-zero use or hold
>> count, e.g., those vnodes that are not on the free list. Note that
>> this list is in addition to the list of all the vnodes associated
>> with a mount point.
>>
>> To avoid adding another set of linkage pointers to the vnode
>> structure, the active list uses the existing linkage pointers
>> used by the free list (previously named v_freelist, now renamed
>> v_actfreelist).
>>
>> This update adds the MNT_VNODE_FOREACH_ACTIVE interface that loops
>> over just the active vnodes associated with a mount point (typically
>> less than 1% of the vnodes associated with the mount point).
> [...]
>> @@ -1099,6 +1128,14 @@ insmntque1(struct vnode *vp, struct moun
>> VNASSERT(mp->mnt_nvnodelistsize >= 0, vp,
>> ("neg mount point vnode list size"));
>> mp->mnt_nvnodelistsize++;
>> + KASSERT((vp->v_iflag & VI_ACTIVE) == 0,
>> + ("Activating already active vnode"));
>> + vp->v_iflag |= VI_ACTIVE;
>> + mtx_lock(&vnode_free_list_mtx);
>> + TAILQ_INSERT_HEAD(&mp->mnt_activevnodelist, vp, v_actfreelist);
>> + mp->mnt_activevnodelistsize++;
>> + mtx_unlock(&vnode_free_list_mtx);
>> + VI_UNLOCK(vp);
>> MNT_IUNLOCK(mp);
>> return (0);
>> }
>
>
> Now, for every vnode that is activated, it has to go through global
> mutex, which seems like scalability issue to me. With ZFS it is typical
> to have a lot of file systems and this global mutex was not needed
> before (well, it was needed, but only to get vnode from the free list).
>
> If we require vnode interlock to be held during v_actfreelist
> manipulation then why can't we use interlock+vnode_free_list_mtx when
> operating on the free list and interlock+per-mountpoint-lock when
> operating on mnt_activevnodelist?
I think this is a better idea for this case, and it should really be fixed.
However, note that a per-mount lock here is still far from ideal: you
would contend heavily on the mount point lock if that mount has many
vnodes being activated.
Anyway, the approach you propose doesn't introduce any scalability
difference from what we already had in place pre-r234482.
Also, it would be good to implement assertions like:
mtx_assert(MNT_MTX(mp), MA_OWNED);
by providing appropriate wrappers, as we do for the vnode interface.
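For instance (a sketch only; MNT_MTX() and mtx_assert() exist in the
kernel, but the wrapper name below is hypothetical, chosen to mirror
the ASSERT_VI_LOCKED() style used for the vnode interlock):

```c
/* Hypothetical wrapper; the name MNT_ILOCK_ASSERT() is illustrative. */
#define	MNT_ILOCK_ASSERT(mp)	mtx_assert(MNT_MTX(mp), MA_OWNED)

/* Callers could then simply write: */
MNT_ILOCK_ASSERT(mp);
```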
Thanks,
Attilio
--
Peace can only be achieved by understanding - A. Einstein