flush on close

Clifton Royston cliftonr at tikitechnologies.com
Fri Sep 12 12:39:03 PDT 2003


> Date: Fri, 12 Sep 2003 05:38:12 +1000 (EST)
> From: Bruce Evans <bde at zeta.org.au>
> Subject: Re: flush on close
> To: Eno Thereska <eno at andrew.cmu.edu>
> Cc: freebsd-fs at freebsd.org
> Message-ID: <20030912051818.B1339 at gamplex.bde.org>
> Content-Type: TEXT/PLAIN; charset=US-ASCII
> 
> On Wed, 10 Sep 2003, Eno Thereska wrote:
> 
> > In FreeBSD 4.4, I am noticing a huge number of calls
> > to ffs_fsync() (in /sys/ufs/ffs/ffs_vnops.c)
> > when running a benchmark like Postmark.
> >
> > ffs_fsync() flushes all dirty buffers with an open file
> > to disk. Normally this function would be called
> > either because the application writer explicitly
> > flushes the file, or if the syncer daemon or buffer daemon
> > decide it's time for the dirty blocks to go to disk.
> >
> > Neither of these two options is happening. Files are opened and closed
> very frequently though. I have a suspicion that BSD is using the
> "flush-on-close" semantics.
> >
> > Could someone confirm or reject this claim?
> > If confirmed, I am wondering how to get rid of it...
> 
> ffs_fsync() is (or should be) rarely called except as a result of
> applications calling fsync(2) or sync(2).  It is not normally called
> by the syncer daemon or buffer daemon (seems to be not at all in 4.4,
> though -current calls it from vfs_bio.c when there are too many dirty
> buffers, and benchmarks like postmark might trigger this).  In 4.4
> the only relevant VOP_FSYNC() seems to be the one in vinvalbuf().
> Using lots of vnodes might cause this to be called a lot, but this
> should only cause a lot of i/o in ffs_fsync() if a lot is really needed.
> Dirty buffers for vnodes which will soon be deleted are supposed to be
> discarded in ffs_fsync().  Benchmarks that do lots of i/o to short-lived
> files tend to cause too much physical i/o, but this is because the i/o
> is done by the buffer (?) daemon before ffs_fsync() can discard it.
> 
> Bruce

  Postmark does specifically try to exercise these aspects of the file
system by randomly creating/writing/closing/reading/deleting many very
small short-lived files in nested directories, causing a lot of
meta-data updates.  IOW, it will use a lot of both vnodes and inodes,
and cause directory data to be updated at an unusually high rate.
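That create/write/close/delete churn can be sketched in C roughly as below (filenames, sizes, and counts are illustrative, not Postmark's actual parameters).  Each pass through the loop dirties an inode, a data block, and the containing directory, which is exactly the kind of metadata traffic described above:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * Create, write, close, and delete many small short-lived files,
 * approximating the benchmark's inner loop.  Note there is no
 * fsync(): the application leaves flushing entirely to the kernel.
 */
int
churn_files(const char *dir, int count)
{
	char path[256], buf[512];
	int i, fd;

	memset(buf, 'x', sizeof(buf));
	for (i = 0; i < count; i++) {
		snprintf(path, sizeof(path), "%s/pm%04d", dir, i);
		fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
		if (fd < 0)
			return (-1);
		if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
			close(fd);
			unlink(path);
			return (-1);
		}
		close(fd);		/* close does not imply flush */
		if (unlink(path) < 0)	/* file is short-lived */
			return (-1);
	}
	return (0);
}
```

If dirty buffers for these files reach the disk before the vnodes are recycled, the benchmark does far more physical i/o than the surviving data would justify.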

  Its specific intent is to simulate very high-volume mail delivery and
mail access traffic against a maildir-style store, but it was also written
(by NetApp staff) to demonstrate the superiority of NetApp's WAFL file
system (served over NFS) versus UNIX FFS under this usage pattern.  That's a
factor to keep in mind, though not a reason to ignore the results as
long as they're honest.

  BSD shouldn't use flush-on-close for the files, but IIRC the volume
of file creation and deletion may be triggering extra flushes of
directory blocks even under softupdates.  It's not the opens and
closes, it's the creation and deletion that will get you.
  -- Clifton

-- 
          Clifton Royston  --  cliftonr at tikitechnologies.com 
         Tiki Technologies Lead Programmer/Software Architect
Did you ever fly a kite in bed?  Did you ever walk with ten cats on your head?
  Did you ever milk this kind of cow?  Well we can do it.  We know how.
If you never did, you should.  These things are fun, and fun is good.
                                                                 -- Dr. Seuss
