CFT: vm_lowmem event handler patch for dirhash

Nick Barkas snb at
Tue Oct 14 16:53:53 UTC 2008

On Mon, Oct 13, 2008 at 2:22 PM, Ivan Voras <ivoras at> wrote:
> Nick Barkas wrote:
>> For more information and a bunch of graphs with results from my
>> benchmarking, take a look at
>> Also, I'll be giving a
>> talk about this project quite soon now at EuroBSDCon 2008.
> It's interesting to see that the 2 MB cache is sometimes a little bit
> faster than the 64 MB one (e.g. kernel build, svn operations, mail). Can
> you point to an explanation? A bad hash function? Bucket count too low?
> Experimental inaccuracy?

Yes, some of the benchmark results have been a bit surprising to me.
On 7.0, at least, the results seem pretty reasonable. The kernel build
and svn operations tests were faster with 2MB than 64MB of memory
without my vm_lowmem handler, or with the patch while using certain
reclaim age values that apparently were not so good. This makes sense
to me: presumably these tasks run faster when more memory is left
available for things other than dirhash. In both of these cases, using
a 64MB limit for dirhash with the reclaim age at 5 seconds
outperformed the default 2MB limit on an unpatched kernel. Mail
creation is faster in all cases when there is a higher memory limit
for dirhash, presumably because this is a task (inserting files into
huge directories) that dirhash optimizes really well.

On -CURRENT things seem to make less sense, though. Both the kernel
build and svn operations are fastest when using 64MB of memory for
dirhash, with no vm_lowmem handler. Mail creation is surprisingly
fastest when using only a 2MB limit for dirhash, and slowest when
using 64MB on an unpatched kernel. This is pretty much the opposite of
what we see on 7.0. Using the kernel with the vm_lowmem handler
results in performance that is usually somewhere between the results
we get with the 2MB and 64MB unpatched kernel.

I don't have a very good theory to explain these results right now.
Most of the changes in the dirhash code between the 7 and 8 branches
involve differences in locking. It would probably be necessary to
profile both the kernel and the benchmark processes to get a
better idea of what's going on. Before I do that, though, I was hoping
to see what kind of results others may find using my code with a
real-world application. It is certainly possible that my results are
strange simply because my tests are not so realistic :)


More information about the freebsd-fs mailing list