zfs + uma
avg at freebsd.org
Wed Sep 22 07:25:29 UTC 2010
on 21/09/2010 19:16 Alan Cox said the following:
> Actually, I think that there is a middle ground between "per-cpu caches" and
> "directly from the VM" that we are missing. When I've looked at the default
> configuration of ZFS (without the extra UMA zones enabled), there is an
> incredible amount of churn on the kmem map caused by the implementation of
> uma_large_malloc() and uma_large_free() going directly to the kmem map. Not
> only are the obvious things happening, like allocating and freeing kernel
> virtual addresses and underlying physical pages on every call, but also
> system-wide TLB shootdowns and sometimes superpage demotions are occurring.
> I have some trouble believing that the large allocations being performed by ZFS
> really need per-CPU caching, but I can certainly believe that they could benefit
> from not going directly to the kmem map on every uma_large_malloc() and
> uma_large_free(). In other words, I think it would make a lot of sense to have
> a thin layer between UMA and the kmem map that caches allocated but unused
> ranges of pages.
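To make the idea above concrete, here is a minimal userspace sketch of such a thin caching layer: freed multi-page ranges are parked on small per-size free lists instead of being returned immediately, so a subsequent allocation of the same size can reuse the range without a round trip to the backing allocator. All names here (range_cache, cached_alloc, cached_free) are invented for illustration; this is a simulation using malloc/free as a stand-in for the kmem map, not actual FreeBSD kernel code.

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Hypothetical sketch: a thin cache between UMA-style large
 * allocations and the backing map.  malloc()/free() stand in for
 * going to the kmem map; the free lists stand in for cached but
 * unused ranges of kernel virtual address space.
 */
#define NCLASSES	4	/* size classes: 1, 2, 4, 8 pages */
#define CACHE_DEPTH	8	/* max cached ranges per class */
#define SIM_PAGE_SIZE	4096

struct range {
	struct range *next;	/* link stored inside the free range */
};

static struct range *cache[NCLASSES];
static int depth[NCLASSES];

static int
size_class(size_t pages)
{
	switch (pages) {
	case 1: return (0);
	case 2: return (1);
	case 4: return (2);
	case 8: return (3);
	default: return (-1);	/* too large: bypass the cache */
	}
}

void *
cached_alloc(size_t pages)
{
	int c = size_class(pages);

	if (c >= 0 && cache[c] != NULL) {
		struct range *r = cache[c];

		cache[c] = r->next;
		depth[c]--;
		return (r);	/* hit: no trip to the backing map */
	}
	/* Miss: stand-in for going directly to the kmem map. */
	return (malloc(pages * SIM_PAGE_SIZE));
}

void
cached_free(void *p, size_t pages)
{
	int c = size_class(pages);

	if (c >= 0 && depth[c] < CACHE_DEPTH) {
		struct range *r = p;

		r->next = cache[c];
		cache[c] = r;
		depth[c]++;
		return;		/* kept for reuse; no unmap */
	}
	free(p);		/* overflow or odd size: release for real */
}
```

In the real kernel the win would be avoiding the per-call map manipulation, TLB shootdowns, and possible superpage demotions that the quoted analysis describes; the sketch only models the reuse policy itself.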
Thank you very much for the testing and analysis.
These are very good points.
So, for reference, here are the two patches that I came up with:
1. the original patch, which attempts to implement Solaris-like behavior but doesn't
go all the way to disabling per-CPU caches:
2. a patch that attempts to implement Jeff's three suggestions; I've tested the
adaptive per-CPU cache sizing, which works well, but haven't tested per-CPU
cache draining yet:
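For the curious, the adaptive sizing tested above can be sketched roughly as follows: over each interval, record the lowest occupancy a per-CPU cache reaches; items that never left the cache during the interval are surplus and can be trimmed back. This is a generic illustration of the low-watermark technique, not the actual patch; the names (pcpu_cache, trim_target) are invented.

```c
#include <assert.h>

/*
 * Hypothetical sketch of adaptive per-CPU cache sizing.  "lowmark"
 * tracks the minimum number of cached items seen during the current
 * interval; at the end of the interval, that many items sat unused
 * the whole time and are candidates for draining.
 */
struct pcpu_cache {
	int depth;	/* items currently cached */
	int lowmark;	/* minimum depth seen this interval */
};

void
cache_get(struct pcpu_cache *pc)
{
	if (pc->depth > 0) {
		pc->depth--;
		if (pc->depth < pc->lowmark)
			pc->lowmark = pc->depth;
	}
}

void
cache_put(struct pcpu_cache *pc)
{
	pc->depth++;
}

/* End of interval: how many items could be trimmed without hurting? */
int
trim_target(struct pcpu_cache *pc)
{
	int surplus = pc->lowmark;

	pc->lowmark = pc->depth;	/* reset for the next interval */
	return (surplus);
}
```

A cache that was dipped into heavily reports a low watermark and is left alone; one that stayed full reports a large surplus and shrinks, which is what "adaptive" buys over a fixed per-CPU cache size.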