ZFS on FreeBSD hardware model for 48 or 96 sata3 paths...
Jason Usher
jusher71 at yahoo.com
Mon Sep 19 19:07:02 UTC 2011
--- On Sat, 9/17/11, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:
> 150KB is a relatively small file size given that the
> default zfs blocksize is 128KB. With so many files you
> should definitely max out RAM first before using SSDs as a
> l2arc. It is important to recognize that the ARC cache
> is not populated until data has been read. The cache
> does not help unless the data has been accessed several
> times. You will want to make sure that all metadata and
> directories are cached in RAM. Depending on how the
> files are used/accessed you might even want to intentionally
> disable caching of file data.
How does one make sure that all metadata and directories are cached in RAM? Just run a 'find' on the filesystem, or a 'du', during the least busy time of day? Or is there a more elegant, or more direct, way to read all of that in?
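(For what it's worth, the crude version of that warming pass might look like the following; "/tank/smallfiles" is a hypothetical mountpoint, and the assumption is that stat()ing every inode is enough to pull the metadata and directory blocks in without touching file data:)

```shell
# Walk the whole tree and stat every file and directory once.
# The stat() calls pull dnodes and directory blocks into the ARC
# without reading any file data blocks.
# "/tank/smallfiles" is a hypothetical mountpoint.
find /tank/smallfiles -ls > /dev/null
```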
Further, if this (small files, lots of them) dataset benefits a lot from having the metadata and dirs read in, how can I KEEP that data in the cache, but not cache the file data (as you suggest, above) ?
Can I explicitly cache metadata/dirs in RAM, and cache file data in L2ARC ?
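(Guessing at my own answer here: the per-dataset primarycache and secondarycache properties look like the relevant knobs, though I haven't tried them on a dataset of this shape. "tank/smallfiles" is a hypothetical dataset name:)

```shell
# Keep only metadata in the ARC (RAM); allow anything into the L2ARC.
# Caveat: blocks normally reach the L2ARC by being evicted from the
# ARC, so with primarycache=metadata file data may never actually
# land in the L2ARC at all.
# "tank/smallfiles" is a hypothetical dataset name.
zfs set primarycache=metadata tank/smallfiles
zfs set secondarycache=all tank/smallfiles
```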
> Are the writes expected to be synchronous writes, or are
> they asynchronous? Are the writes expected to be
> primarily sequential (e.g. whole file), or is data
> accessed/updated in place?
It's a mix, I'm afraid.
More information about the freebsd-fs mailing list