Journalling FS and Soft Updates comparison

Loren M. Lang lorenl at
Wed Feb 9 19:01:21 PST 2005

Traditionally, filesystems have been designed with the idea that the
data will always be written to disk safely, and not much effort was put
into making them robust against interruptions.

Journalling filesystems and soft updates are two different techniques
designed to solve the problem of keeping data consistent even in the
case of a major interruption like a power blackout.  Both work solely on
the metadata, not the real data.  This means increasing a file's size
is protected, but not necessarily the data that's being written.  (Does
this also mean that the data will be written to free space before the
file size is increased, so extraneous data won't be left in the file?)
Journalling works by recording every metadata change it is about to
execute in a special place on the hard drive called the journal; it
then updates all the metadata and finally marks the journal entry
completed.  Soft updates are simply a way to order metadata writes so
that they happen in a safe order.  An example is moving file a from
directory x to directory y: the entry for a is first added to dir y,
and only then deleted from dir x.  If a crash happens in the middle,
the file shows up in both directories and its link count is too high,
but nothing is lost.
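The write-ahead idea above can be sketched in a few lines of Python.  This is a toy model with made-up names (Journal, move_file, replay), not any real filesystem's code: log the intended metadata changes, apply them, mark the entry done, and after a "crash" redo any entry that never finished.

```python
class Journal:
    """Toy write-ahead journal; self.log stands in for the on-disk journal area."""
    def __init__(self):
        self.log = []

    def write_entry(self, ops):
        # 1. Record every metadata change BEFORE touching the metadata.
        entry = {"ops": ops, "done": False}
        self.log.append(entry)
        return entry

    def commit(self, entry):
        # 3. Mark the entry completed once all the metadata is on disk.
        entry["done"] = True

def move_file(journal, dirs, name, src, dst):
    entry = journal.write_entry([("del", src, name), ("add", dst, name)])
    # 2. Apply the metadata updates themselves.
    dirs[src].remove(name)
    dirs[dst].append(name)
    journal.commit(entry)

def replay(journal, dirs):
    # After a crash, redo any entry not marked done; completed entries
    # are skipped, which is why the post-crash check is so quick.
    for entry in journal.log:
        if not entry["done"]:
            for op, d, name in entry["ops"]:
                if op == "del" and name in dirs[d]:
                    dirs[d].remove(name)
                elif op == "add" and name not in dirs[d]:
                    dirs[d].append(name)
            entry["done"] = True
```

If the machine dies between write_entry and commit, replay re-executes the logged ops, so the metadata ends up consistent either way.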

Now this shouldn't be a big deal since it's harmless to anything else;
just some free space is eaten up.  Since all metadata updates have this
same kind of harmless behavior, that's why fsck can now be done in the
background instead of the foreground.
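The ordering rule can be sketched too.  This is a hypothetical helper, not the real FFS dependency tracker: write the new reference before removing the old one, so a crash at any point leaves at worst an extra reference (leaked space) rather than a dangling or lost one.

```python
def rename_ordered(writes, dirs, name, src, dst):
    """Toy soft-updates-style rename; `writes` records the flush order."""
    # 1. Add the new directory entry first and flush it.
    dirs[dst].append(name)
    writes.append(("flush_dir", dst))
    # -- a crash here leaves the file visible in both directories;
    #    background fsck just corrects the link count --
    # 2. Only then remove the old entry.
    dirs[src].remove(name)
    writes.append(("flush_dir", src))
```

The point is purely the order of the two flushes: reversing them would open a window where the file has no directory entry at all.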

Now comparing the two: performance-wise, journalling has an advantage
since every group of metadata updates written to the journal at the
same time can be reordered to optimize disk performance.  The disk head
just has to move across the disk in order instead of seeking back and
forth.  In practice some of this gain is lost because the journal
constantly needs updating and it probably lies in one small area of the
disk.  The other benefit of the journal is very quick fsck times, since
all fsck has to do is see what the journal was updating and make sure
it all completed.  Soft updates still require a full fsck, but since it
can be done in the background, unlike journalling, that means even
faster startup time, at the cost of more CPU and I/O time spent on it.
Now if the journal of a journalling fs could be kept somewhere else,
say in some kind of NVRAM, then journalling might be more efficient
overall in disk I/O and CPU time than soft updates.
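The reordering win is essentially an elevator sort.  A toy illustration (made-up function, just the idea): metadata blocks queued in one journal commit can be sorted into a single sweep so the head crosses the disk in order instead of seeking back and forth.

```python
def elevator_order(pending_blocks, head_pos=0):
    """Toy elevator pass: visit blocks at/above the head in one upward
    sweep, then wrap around to the blocks it passed over."""
    ahead = sorted(b for b in pending_blocks if b >= head_pos)
    behind = sorted(b for b in pending_blocks if b < head_pos)
    return ahead + behind
```

For example, with the head at block 40, pending blocks [90, 10, 50, 70] are visited as 50, 70, 90 on the way up, then 10 after the wrap.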

I'm mainly just trying to get an understanding of these two techniques,
not necessarily saying one is better.  In the real world it probably
depends heavily on many other things: lots of random access vs.
sequential, many files and file ops per second vs. mostly read-only
with noatime set, etc.
I sense much NT in you.
NT leads to Bluescreen.
Bluescreen leads to downtime.
Downtime leads to suffering.
NT is the path to the darkside.
Powerful Unix is.

Public Key:
Fingerprint: B3B9 D669 69C9 09EC 1BCD  835A FAF3 7A46 E4A3 280C

More information about the freebsd-fs mailing list