UNEXPECTED SOFT UPDATE INCONSISTENCY; RUN fsck MANUALLY
zbeeble at gmail.com
Mon Sep 29 05:12:09 UTC 2008
On Mon, Sep 29, 2008 at 12:00 AM, Jeremy Chadwick <koitsu at freebsd.org>wrote:
> On Sun, Sep 28, 2008 at 11:30:01PM -0400, Zaphod Beeblebrox wrote:
> > However, as a core general-purpose filesystem, it seems to have flaws,
> > not the least of which is a re-separation of the file cache and memory
> > cache. That virtually doesn't matter for a fileserver, but is generally
> > important in a general-purpose local filesystem. ZFS also has a
> > transactional nature which probably, again, works well in a fileserver,
> > but I find (as a local filesystem) it introduces unpredictable delays as
> > the buffer fills up and then gets flushed en masse.
> I'm curious to know how Solaris deals with these problems, since the
> default filesystem (AFAIK) in OpenSolaris is now ZFS. CC'ing pjd@ who
> might have some insight there.
I certainly am not implying that it won't work as a local filesystem, simply
that this design choice may not be ideal for completely generalized local
workloads --- the same workloads that drove UN*X in general toward unified
buffer caches, which appear to have been implemented independently by every
major UN*X vendor... Solaris may even have been the first.
The ARC is separate from the general VM cache in Solaris, too, IIRC.
Solaris' UFS still uses a unified cache.
Most of the problems where ZFS runs the machine out of kernel memory (or
fights with other filesystems for memory, etc.) are due to the effects of
its non-unified cache. Solaris and newer patches to FreeBSD seem to make
this play better, but the fundamental reason for unifying the filesystem and
memory caches was the payoff that local applications' memory and file usage
would balance out better if the buffering of files and memory came not just
from the same pool of memory but was in fact the "same thing".
Historically, the file cache was a fixed percentage of memory (say 10%).
The next innovation (I seem to remember my HP-UX 9 workstation doing this)
was to let the division of memory between the file and memory caches move
dynamically. This was better but non-optimal, and it is the state of affairs
with ZFS today. Unified caches sprang up in UN*X derivatives shortly
thereafter, where caching a file and caching memory became one and the
same. This is where UFS sits.
Expanding on my post, if the job is to serve network disk, the dynamic-division
and unified-cache strategies probably don't make much difference. The
"Thumper" offering from Sun gives you 48 SATA disks, two dual-core Opterons,
and 16G of memory. The obvious intention is that most of that 16G ends up, in
the end, as cache for the files (all in 4U and all externally accessible ---
very cool, BTW).
But a general purpose machine is executing many of those libraries and
binaries and mmap()ing many of those files... both operations where the
unified strategy was designed to win.