Per-mount syncer threads and fanout for pagedaemon cleaning

Kostik Belousov kostikbel at gmail.com
Tue Dec 27 16:58:08 UTC 2011


On Tue, Dec 27, 2011 at 05:05:04PM +0100, Attilio Rao wrote:
> 2011/12/27 Giovanni Trematerra <giovanni.trematerra at gmail.com>:
> > On Mon, Dec 26, 2011 at 9:24 PM, Venkatesh Srinivas
> > <vsrinivas at dragonflybsd.org> wrote:
> >> Hi!
> >>
> >> I've been playing with two things in DragonFly that might be of interest
> >> here.
> >>
> >> Thing #1 :=
> >>
> >> First, per-mountpoint syncer threads. Currently there is a single thread,
> >> 'syncer', which periodically calls fsync() on dirty vnodes from every mount,
> >> along with calling vfs_sync() on each filesystem itself (via syncer vnodes).
> >>
> >> My patch modifies this to create syncer threads for mounts that request it.
> >> For these mounts, vnodes are synced from their mount-specific thread rather
> >> than the global syncer.
> >>
> >> The idea is that periodic fsync/sync operations from one filesystem
> >> should not stall or delay synchronization for other ones.
> >> The patch was fairly simple:
> >> http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/50e4012a4b55e1efc595db0db397b4365f08b640
> >>
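
For illustration, here is a minimal sketch of what such a per-mount syncer
loop could look like on the FreeBSD side.  The function names, the opt-in
hook and the fixed 30-second period are made up for the example; this is
not the DragonFly commit above and not the syncer_alpha diff mentioned
further down.

/*
 * Illustrative sketch only.  A real implementation would also need
 * vfs_busy()/vfs_unbusy(), a shutdown handshake with unmount, and the
 * existing syncer-vnode machinery.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/kthread.h>
#include <sys/mount.h>
#include <sys/proc.h>

static void
mount_syncer_thread(void *arg)
{
	struct mount *mp = arg;

	for (;;) {
		/*
		 * Flush this mount's dirty data, as the global syncer
		 * does today with MNT_LAZY.
		 */
		(void)VFS_SYNC(mp, MNT_LAZY);

		/*
		 * Sleep until the next sync period (the traditional 30 s);
		 * unmount would have to wake this thread up and reap it.
		 */
		pause("mntsyn", 30 * hz);
	}
}

/*
 * Hypothetical opt-in hook, called from mount code for filesystems that
 * request a private syncer.
 */
static int
mount_syncer_start(struct mount *mp)
{
	struct proc *p;

	return (kproc_create(mount_syncer_thread, mp, &p, 0, 0,
	    "syncer:%s", mp->mnt_stat.f_mntonname));
}
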
> >
> > There's something WIP by attilio@ in that area.
> > You might want to take a look at
> > http://people.freebsd.org/~attilio/syncer_alpha_15.diff
> >
> > I don't know what hammerfs needs, but UFS/FFS and the buffer cache already
> > do a good job performance-wise, so the authors are skeptical about the boost
> > such a change can give. We believe that brain cycles need to be spent on
> > other pieces of the system, such as ARC and ZFS.
> 
> More specifically, it is likely that focusing on UFS and the buffer cache
> for performance is not really useful; we should focus our efforts on
> ARC and ZFS.
> Also, the real bottlenecks in our I/O paths are GEOM's
> single-threaded design, the lack of unmapped I/O functionality, possibly
> the lack of prioritized I/O, etc.
Even if not useful for performance (which is possible), the change itself
is useful because it provides better system behaviour in the case of failure.
E.g., a slowly-responding or wedged NFS server, a dying disk, etc. would
have a more limited impact with the patch than without it. It will not
completely solve the issue, since e.g. the amount of dirty buffers is not
limited per mount point, only globally. But at least it covers a significant
part of the problem.
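
For concreteness, per-mount limiting would need something along the
following lines; the structure and the cap are purely hypothetical, nothing
like them exists in the tree today, where throttling keys off the global
dirty-buffer count (numdirtybuffers).

/*
 * Hypothetical illustration of the point above; neither the structure
 * nor the per-mount limit exists today.
 */
struct mount_dirty_acct {
	int	md_dirtycnt;	/* dirty buffers attributed to this mount */
	int	md_dirtymax;	/* hypothetical per-mount cap */
};

/*
 * Would be consulted before allowing a write to dirty another buffer,
 * so one wedged mount could not pin the whole buffer cache.
 */
static __inline int
mount_dirty_over_limit(const struct mount_dirty_acct *md)
{

	return (md->md_dirtycnt >= md->md_dirtymax);
}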

Also, it should help with interactivity and with the load spikes at the 30-second sync interval.

I remember that I had no major objections when I read the patch. I personally
would prefer to have it committed.