ufs2 / softupdates / ZFS / disk write cache

Dan Naumov dan.naumov at gmail.com
Sun Jun 21 10:03:18 UTC 2009

On Sun, Jun 21, 2009 at 12:27 PM, Erik Trulsson <ertr1013 at student.uu.se> wrote:
> On Sun, Jun 21, 2009 at 05:18:39AM +0300, Dan Naumov wrote:
>> Uh oh.... After some digging around, I found the following quote: "ZFS
>> is designed to work with storage devices that manage a disk-level
>> cache. ZFS commonly asks the storage device to ensure that data is
>> safely placed on stable storage by requesting a cache flush." at
>> http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide I
>> guess this might be somewhat related to why in the "disk cache
>> disabled" scenario, ZFS suffers bigger losses than UFS2.
> If that quote is correct (and I have no real reason to doubt it) then
> it should probably be safe to enable the disk's write cache when used with
> ZFS.  (That would make sense since UFS/FFS was originally designed to work
> with an older generation of disks that did not do any significant amount
> of write-caching (partly due to having very little cache on them), while
> ZFS has been designed to be used on modern hardware, and to be reliable even
> on cheap consumer-grade disks.)
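For reference, the FreeBSD knobs involved here are the ATA write-cache tunable and the ZFS cache-flush override. A minimal loader.conf sketch, assuming the tunable names used on FreeBSD 7.x/8.x (verify against sysctl(8) on your release):

```
# /boot/loader.conf -- illustrative defaults, check names on your FreeBSD version

# ATA disk write cache (1 = enabled, the modern default):
hw.ata.wc=1

# ZFS issues BIO_FLUSH requests so the drive commits its cache to stable
# storage; leaving this at 0 (the default) keeps those flushes enabled.
# Setting it to 1 skips the flushes and trusts the drive/controller to
# never lose cached writes -- risky on consumer-grade disks:
vfs.zfs.cache_flush_disable=0
```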

Actually, now that I think of it, this could be pretty big. If using
ZFS on a disk causes the disk to flush its cache every 5 seconds,
wouldn't that mean that the sections of the cache holding data from
the UFS partition get flushed to disk as well, mostly eliminating the
entire "disk cache lying = softupdates inconsistent" problem
altogether? The most important part of this is obviously whether the
"ZFS forces cache flushes every 5 seconds" behaviour works in all cases
(like mine, where I use ZFS on a slice) and not only those where ZFS
is given direct access to the disk. Anyone knowledgeable in the ways of
the FreeBSD ZFS implementation care to chip in? :)

Dan Naumov
