PATCH: Forcible delaying of UFS (soft)updates

Terry Lambert tlambert2 at mindspring.com
Thu Apr 17 17:08:50 PDT 2003


Marko Zec wrote:
> On Thursday 17 April 2003 18:40, Terry Lambert wrote:
> > Marko Zec wrote:
> > > David Schultz wrote:
> > > > I was referring to all the places where rushjob is set to or
> > > > incremented by syncer_maxdelay.  AFAIK, it should never be that
> > > > large.
> > >
> > > Hmm... Why? :)
> >
> > Increased latency; larger pool retention time, larger pool size,
> > more kernel memory tied up in dependency lists for longer, more
> > operations blocked because a dependency is already on the write
> > list, and so locked against modification.
> 
> Increasing "rushjob" has only a single consequence, and that is precisely a
> prompt flushing of dirty buffers. Are you sure we are talking about the same
> code here, rushjob in kern/vfs_subr.c, or something completely different?

I'm talking about what David Schultz was talking about when you
said "Hmm... Why?".  8-).

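(For reference, since the thread keeps talking past itself: this is
roughly what the rushjob machinery in kern/vfs_subr.c does, paraphrased
from memory rather than quoted verbatim.)

/*
 * Paraphrase of the 4.x syncer loop in kern/vfs_subr.c -- not a
 * verbatim quote.  rushjob is simply a count of extra passes the
 * syncer should make over its worklist without sleeping in between.
 */
extern int rushjob;	/* extra passes requested, e.g. by speedup_syncer() */
extern int syncdelay;	/* nominal seconds a dirty vnode sits on the worklist */

static void
sched_sync(void)
{
	for (;;) {
		/* ... flush one second's slot of the syncer worklist ... */

		/*
		 * If someone asked us to hurry, skip the one-second
		 * sleep and run the next slot immediately.
		 */
		if (rushjob > 0) {
			rushjob -= 1;
			continue;
		}
		tsleep(&lbolt, PPAUSE, "syncer", 0);
	}
}

int
speedup_syncer(void)
{
	/* Cap the rush so requests can't pile up without bound. */
	if (rushjob < syncdelay / 2) {
		rushjob += 1;
		return (1);
	}
	return (0);
}

So yes, bumping rushjob gets dirty buffers out promptly; the cost I'm
worried about is everything that accumulates while you're *not*
rushing.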
If you increase the syncer delay, you increase the amount of
unsynced data that's outstanding, on average, which is what
makes doing it dangerous.  Especially right now, when there
is a lot of code that doesn't expect a NULL return from the
kernel malloc, even though the new kernel malloc can always
return NULL.  Any additional memory pressure you force on
things through added latency is Bad(tm).

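To make that concrete, here is a minimal sketch of the failure mode;
"struct mydep" and mydep_alloc() are invented names, with M_TEMP
standing in for a real malloc type:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>

/*
 * Illustration only.  malloc(9) called with M_NOWAIT returns NULL
 * under memory pressure; any caller that dereferences the result
 * without checking panics.  Longer syncer delays keep more of these
 * allocations live at once, which is exactly that pressure.
 */
struct mydep {
	int	d_state;
};

static struct mydep *
mydep_alloc(void)
{
	struct mydep *dep;

	dep = malloc(sizeof(*dep), M_TEMP, M_NOWAIT);
	if (dep == NULL)
		return (NULL);	/* caller must back off, not blow up */
	dep->d_state = 0;
	return (dep);
}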

> > I'm wondering if this really helps some real world situation;
> > my gut feeling is that it doesn't, and it increases memory use
> > considerably, until it's flushed.
> 
> Ignoring fsync _really_ helps in real-world situations, if you keep in mind
> that the original purpose of the patch is to keep the disk spun down and
> save battery power.

I understand the original purpose; I'd still like to see stats to
back up whether or not it accomplishes it.  8-).


> > I know that this will probably end up being observer-influenced
> > enough to be merely anecdotal, but say, gather two sets over an
> > extended period of use without powering the machine down: the
> > first set without the change, and the next set with it.
> >
> > Either way it turns out, it would make a stronger case for or
> > against than just hand-waving.  8-).
> 
> Such a measurement could turn out to be relevant only if one were to
> precisely define a test load.

Which is why I suggested a statistical load, instead.  FreeBSD
isn't well enough put together to allow you to replay an I/O
load like that, particularly a sparse one, so the best you are
going to be able to get is statistical significance.

Actually, if you think about it, it would be hard to prove that
even a repeatable sparse load was unbiased for a particular
result, so you're back to gathering statistical data anyway, to
create a couple of "representative" load sets.

> Obviously different results could be expected if the
> machine were completely idle and if it were not. Instead of just
> hand-waving, could we more closely specify what we consider a relevant
> load for a battery-powered laptop? :)

I guess that would be "any load where the fsync patch helps"?

8-) 8-).

I think it would probably be better to stall the soft updates
clock, flush the pending block I/O out (to unlock the buffers),
and then spin down the disks under OS control.  You could really
guarantee relevance in that case.  Anyone who complained could
pick their own relevance criteria, and hack the code.

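Sketched with invented names (none of these four helpers are real
kernel interfaces), the sequence I mean is:

/*
 * Hypothetical sketch of the suggestion above; syncer_suspend(),
 * syncer_resume(), flush_dirty_buffers() and disk_spindown() are
 * invented names for the three steps, not real KPIs.
 */
static void	syncer_suspend(void);		/* stall the soft updates clock */
static void	syncer_resume(void);
static int	flush_dirty_buffers(void);	/* push pending block I/O */
static int	disk_spindown(void);		/* e.g. ATA STANDBY IMMEDIATE */

static int
laptop_spindown(void)
{
	int error;

	syncer_suspend();		/* 1. no new dependency-driven writes */
	error = flush_dirty_buffers();	/* 2. unlock the buffers */
	if (error != 0) {
		syncer_resume();	/* don't leave the clock stalled */
		return (error);
	}
	return (disk_spindown());	/* 3. disk stays down until next write */
}
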
-- Terry

