svn commit: r251894 - in head: lib/libmemstat sys/vm

Jeff Roberson jroberson at jroberson.net
Tue Jun 18 21:29:07 UTC 2013


On Tue, 18 Jun 2013, Alfred Perlstein wrote:

> On 6/18/13 4:37 AM, Gleb Smirnoff wrote:
>> On Tue, Jun 18, 2013 at 10:25:08AM +0200, Andre Oppermann wrote:
>> A> There used to be a problem with per CPU caches accumulating large
>> A> amounts of items without freeing back to the global (or socket) pool.
>> A>
>> A> Do these updates to UMA change this situation and/or do you have further
>> A> improvements coming up?
>> 
>> This is especially a problem with ZFS, which utilizes UMA extensively.
>> 
>> IMHO, we need a flag for uma_zcreate() that would disable per CPU caches,
>> so that certain zones (ZFS at least) would have them off.
>> 
>> It might be a good idea to force this flag on every zone that has an
>> allocation size >= the page size.
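
(Purely as an illustration of that suggestion: a minimal sketch of what such
a zone-creation call site could look like.  uma_zcreate() is the existing
interface, but the UMA_ZONE_NOPCPUCACHE flag and its value are hypothetical
stand-ins for whatever flag would actually be added.)

#include <sys/param.h>
#include <vm/uma.h>

/*
 * Hypothetical flag, not part of UMA: ask the zone to bypass the
 * per-CPU bucket layer and always go straight to the keg.
 */
#define UMA_ZONE_NOPCPUCACHE    0x100000

static uma_zone_t large_item_zone;

static void
large_item_zone_create(size_t item_size)
{
        int flags;

        /* Force the flag on for anything at least a page in size. */
        flags = (item_size >= PAGE_SIZE) ? UMA_ZONE_NOPCPUCACHE : 0;
        large_item_zone = uma_zcreate("large_item", item_size,
            NULL, NULL, NULL, NULL,     /* ctor, dtor, init, fini */
            UMA_ALIGN_PTR, flags);
}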
>> 
> What about people running with 256GB+ RAM?  Do they also want the per-CPU
> caches off?

If you look at the new system, there is a static, item-size-based threshold
that determines which size of per-CPU bucket a zone uses.  What might make
sense is to tune these thresholds based on available memory.  For what it's
worth, I looked at the Solaris settings and they cache roughly 4x as much on
a per-CPU basis.
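
Roughly, the idea is a fixed table keyed on item size that decides how large
a per-CPU bucket a zone is allowed, so big items get small buckets or none at
all.  Here is a small user-space sketch of that kind of lookup; the threshold
and bucket values are invented for illustration and are not the numbers UMA
actually uses.

#include <stddef.h>
#include <stdio.h>

/* Illustrative thresholds: items up to a size get buckets this large. */
struct bucket_threshold {
        size_t  max_item_size;
        int     bucket_entries;
};

static const struct bucket_threshold thresholds[] = {
        {   64, 128 },          /* tiny items: big per-CPU buckets */
        {  256,  64 },
        { 1024,  32 },
        { 4096,   8 },          /* page-sized items: small buckets */
        {    0,   0 },          /* anything larger: no per-CPU caching */
};

/* Pick a bucket size for an item size; 0 means "don't cache per-CPU". */
static int
bucket_entries_for(size_t item_size)
{
        const struct bucket_threshold *t;

        for (t = thresholds; t->max_item_size != 0; t++)
                if (item_size <= t->max_item_size)
                        return (t->bucket_entries);
        return (0);
}

int
main(void)
{
        size_t sizes[] = { 32, 512, 4096, 65536 };
        size_t i;

        for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                printf("item %6zu -> bucket of %d entries\n",
                    sizes[i], bucket_entries_for(sizes[i]));
        return (0);
}

Tuning by available memory could then be a matter of scaling bucket_entries
(or the thresholds themselves) at boot based on how much RAM the machine has.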

Compared to the old system, the new one should tend to cache fewer of the
large and infrequent allocations.  I can't say yet whether it is still a
problem.

I have an implementation of vmem to replace the use of vm_maps for kmem_map,
buffer_map, etc., which may resolve the ZFS allocation problems.  I hope to
get it in over the next few weeks.
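
For reference, vmem is the Bonwick-style resource allocator: consumers take
address ranges from an arena instead of a dedicated vm_map.  Below is a rough
sketch of what a consumer of such an arena might look like; the function
names are made up, and the signatures and flags only approximate the vmem(9)
interface, since that code had not been committed when this was written.

#include <sys/param.h>
#include <sys/malloc.h>
#include <sys/vmem.h>

/*
 * Sketch only: carve allocations out of a vmem arena instead of a
 * dedicated submap.  Error handling is abbreviated.
 */
static vmem_t *buffer_arena;

static void
buffer_arena_init(vmem_addr_t base, vmem_size_t size)
{
        /* quantum = PAGE_SIZE: the arena hands out page-granular ranges. */
        buffer_arena = vmem_create("buffer arena", base, size,
            PAGE_SIZE, 0, M_WAITOK);
}

static int
buffer_space_alloc(vmem_size_t size, vmem_addr_t *addrp)
{
        return (vmem_alloc(buffer_arena, round_page(size),
            M_BESTFIT | M_WAITOK, addrp));
}

static void
buffer_space_free(vmem_addr_t addr, vmem_size_t size)
{
        vmem_free(buffer_arena, addr, round_page(size));
}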

Thanks,
Jeff


>
> -- 
> Alfred Perlstein
> VP Software Engineering, iXsystems
>

