unionfs umount problem

Justin Hopper justin.hopper at offerfusion.com
Tue Aug 9 16:29:10 GMT 2005


Hello,

(I had posted this to -hackers, but -fs is probably a better place)

I had emailed a question to the -hackers list a few months ago, asking
what the current status was on unionfs and whether it was now somewhat
stable, since the man page still reports it as broken and without a
maintainer.  Some people on the list had reported that they had been
using unionfs without any problems for a few releases now, so I thought
I would try merging some of what we do with jails into a model using
unionfs and see if I ran into any problems.

The initial tests went fine: mounting an empty vnode-backed filesystem,
which represents a client's disk space; mounting a complete world below
it as the lower layer of a unionfs; and starting the jail with this
unionfs mounted as / inside the jail.
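Roughly the sequence we use, for reference (device units, sizes, and
paths below are just from our setup):

    # back the client's disk space with a fixed-size vnode-backed fs
    truncate -s 512M /jails/client1.img
    mdconfig -a -t vnode -f /jails/client1.img -u 1
    newfs /dev/md1
    mount /dev/md1 /jails/client1

    # mount a complete world below it, so the client fs is the top layer
    mount_unionfs -o below /jails/world /jails/client1

    # start the jail with the union as its root
    jail /jails/client1 client1 10.0.0.1 /bin/sh /etc/rc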

The system runs fine, but I ran into a problem when taking it down.  The
system calls /etc/rc.shutdown, then kills the remaining processes from
the host environment, then unmounts /proc and /dev.  No problems.  But
when umount is called on the unionfs, it fails with EBUSY and never
releases.  Forcing it with "umount -f" hangs the calling process, which
never returns.  Even on system shutdown or reboot, the process will not
terminate, and it will sometimes even prevent the box from rebooting.
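In concrete terms, the teardown looks something like this (the jail ID
and paths are from our setup):

    jexec $JID /bin/sh /etc/rc.shutdown   # run the jail's shutdown scripts
    # ...kill whatever is left in the jail from the host...
    umount /jails/client1/proc            # fine
    umount /jails/client1/dev             # fine
    umount /jails/client1                 # the unionfs: fails with EBUSY
    umount -f /jails/client1              # hangs, never returns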

I've checked that no processes are left in the jail and the prison
itself seems to be fully collapsed.  I also checked open file handles
with fstat and lsof.  I can't seem to find anything running that would
be tying up the mount point.  Could it be that something called vfs_busy
on the mountpoint, then terminated without ever releasing it?  Are there
any tools available to check details like this?  Or even to clear such a
flag from the mount so that it can be unmounted safely?
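For reference, this is how we looked for open files (the mountpoint is
from our setup); neither turned up anything:

    fstat -f /jails/client1   # all open files on that filesystem
    lsof /jails/client1       # lsof treats a mountpoint as the whole fs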

A few other side notes on unionfs: apart from the problem above, it
seems pretty solid.  The white-out support is nice: with unionfs as the
lower layer and the vnode filesystem as the top layer, you can "delete"
files that exist only in the lower layer, and they no longer show up in
the union view but remain intact in the lower layer.  Though that is
probably a feature of ufs, since ufs is controlling the upper layer.
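A quick illustration of the behavior I mean (paths are invented):

    echo test > /jails/world/tmp/lower-only   # exists only in the lower layer
    ls /jails/client1/tmp/lower-only          # visible through the union
    rm /jails/client1/tmp/lower-only          # whiteout goes in the upper layer
    ls /jails/client1/tmp/lower-only          # gone from the union view
    ls /jails/world/tmp/lower-only            # still intact in the lower layer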

A completely different question: we are thinking of moving away from
vnode-backed filesystems for client disk space, but we would still need
some way to control each client's usage.  Is there such a thing as
directory quotas?  I'm sure somebody must have asked this before, but
I've never heard it mentioned.  I assume there must be some reason to
avoid it, or somebody would have added it by now.
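For comparison, the only quota support I'm aware of in ufs is per-user
and per-group, enabled per filesystem, not per directory (names below
are just examples, and the kernel needs "options QUOTA"):

    # /etc/fstab:
    # /dev/ad0s1g  /home  ufs  rw,userquota  2  2
    quotacheck /home        # build the initial quota database
    edquota -u client1      # set block/inode limits for the user
    quotaon /home           # start enforcing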

Thanks for any suggestions.
-- 
Justin Hopper
justin.hopper at offerfusion.com
OfferFusion, Inc.
AIM: OF Justin
800.618.6838 x.702


