svn commit: r269963 - head/sys/kern
Alan Cox
alc at rice.edu
Thu Aug 14 22:47:25 UTC 2014
On 08/14/2014 01:24, Xin Li wrote:
> On 8/13/14 10:35 PM, John-Mark Gurney wrote:
> > Xin LI wrote this message on Thu, Aug 14, 2014 at 05:13 +0000:
> >> Author: delphij Date: Thu Aug 14 05:13:24 2014 New Revision:
> >> 269963 URL: http://svnweb.freebsd.org/changeset/base/269963
> >>
> >> Log: Re-instate UMA cached backend for 4K - 64K allocations. New
> >> consumers like geli(4) use malloc(9) to allocate temporary
> >> buffers that get freed shortly, causing frequent TLB shootdowns
> >> as observed in hwpmc-supported flame graphs.
>
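For context, the allocation pattern being described is roughly the
following; this is a hypothetical sketch (the function name and the use
of M_TEMP are mine, it is not the actual geli(4) code):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>

/*
 * Hypothetical sketch of the pattern: a temporary buffer is allocated
 * per request, used once, and freed immediately.  Without a cached UMA
 * backend, every such free of a multi-page buffer can end up in the VM
 * layer and trigger a TLB shootdown.
 */
static void
transform_one_request(void *data, size_t len)
{
        char *tmp;

        tmp = malloc(len, M_TEMP, M_WAITOK);    /* typically 4K-64K */
        memcpy(tmp, data, len);
        /* ... transform tmp in place and hand it down the stack ... */
        free(tmp, M_TEMP);
}
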
> > Can we do even larger, like 128k for phys io sized blocks?
>
> Sure (Actually I'm running with 128k and 256k buckets enabled on my
> own storage box; with r269964 we can easily add new buckets without
> actually activating them by default).
>
> However, I'm reluctant to add them right now because the current
> malloc(9) implementation would use the next bucket size, which is 2x
> the previous one, when the requested size is only a little bit
> larger than the smaller chunk's size. In the real world the larger
> bucket could eat more memory than all the smaller (but larger than
> page-sized) buckets combined (the actual consumption is still small,
> though).
>
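To put numbers on that concern: with power-of-2 buckets only, a request
of, say, 68 KB would have to come from a 128 KB bucket, leaving roughly
60 KB of every such item unused, and a few dozen of those allocations
could tie up more memory in the 128 KB zone than all of the smaller
page-and-larger zones hold together.
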
The current code already supports sizes that are not powers of 2. For
example, with
Index: kern/kern_malloc.c
===================================================================
--- kern/kern_malloc.c (revision 269997)
+++ kern/kern_malloc.c (working copy)
@@ -152,8 +152,11 @@ struct {
{2048, "2048", },
{4096, "4096", },
{8192, "8192", },
+ {12288, "12288", },
{16384, "16384", },
+ {24576, "24576", },
{32768, "32768", },
+ {49152, "49152", },
{65536, "65536", },
{0, NULL},
};
I see
ITEM SIZE LIMIT USED FREE REQ FAIL SLEEP
UMA Kegs: 384, 0, 94, 6, 94, 0, 0
...
16: 16, 0, 2501, 260, 36924, 0, 0
32: 32, 0, 2405, 470, 94881, 0, 0
64: 64, 0, 12480, 8042, 1365658, 0, 0
128: 128, 0, 12886, 26019, 211536, 0, 0
256: 256, 0, 5352, 2223, 463546, 0, 0
512: 512, 0, 2797, 7819, 46986, 0, 0
1024: 1024, 0, 70, 126, 89345, 0, 0
2048: 2048, 0, 2037, 1353, 168857, 0, 0
4096: 4096, 0, 289, 17, 108610, 0, 0
8192: 8192, 0, 26, 1, 323, 0, 0
12288: 12288, 0, 9, 0, 159, 0, 0
16384: 16384, 0, 4, 2, 97, 0, 0
24576: 24576, 0, 7, 2, 55, 0, 0
32768: 32768, 0, 1, 1, 34, 0, 0
49152: 49152, 0, 6, 1, 56, 0, 0
65536: 65536, 0, 8, 2, 784, 0, 0
after a few minutes of activity.
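
To illustrate how much the extra zones reduce the slack, here is a
minimal userland sketch of the zone selection; it is my own
simplification (a linear scan rather than the kmemsize[] lookup table
the kernel actually uses), with the size table taken from the patch
above:

#include <stdio.h>

/*
 * Zone sizes from the patch above (4K through 64K, including the new
 * non-power-of-2 entries); 0 terminates the table.
 */
static const unsigned zone_sizes[] = {
        4096, 8192, 12288, 16384, 24576, 32768, 49152, 65536, 0
};

/*
 * Return the smallest zone that fits, or 0 if the request is larger
 * than any zone and would fall through to a direct VM allocation.
 */
static unsigned
pick_zone(unsigned req)
{
        int i;

        for (i = 0; zone_sizes[i] != 0; i++)
                if (req <= zone_sizes[i])
                        return (zone_sizes[i]);
        return (0);
}

int
main(void)
{
        unsigned reqs[] = { 9000, 20000, 40000 };
        int i;

        for (i = 0; i < 3; i++) {
                unsigned z = pick_zone(reqs[i]);

                printf("request %6u -> zone %6u (%u bytes of slack)\n",
                    reqs[i], z, z - reqs[i]);
        }
        return (0);
}

With only the power-of-2 zones those three requests would have come
from 16K, 32K, and 64K items; with the non-power-of-2 entries they fit
in 12K, 24K, and 48K items, cutting the per-item slack by half or more.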
> I think eventually the right way to go is to adopt a more sophisticated
> allocation strategy like the one used in jemalloc(3); this changeset
> is more-or-less temporary for now: I committed it mainly because it
> eliminated a large portion of the unwanted TLB shootdowns I have
> observed, with very reasonable overhead (a few megabytes of RAM).
>
> > geli can do allocations >128k, which could be broken into two
> > parts, one in the <8k sized range and the other in 128k...
>
> Yes, this is another issue that I'd like to solve.
>
>
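As for the >128k case, the split being suggested could look something
like the sketch below; this is only an illustration of the arithmetic
(nothing in malloc(9) or geli(4) does this today, and the caller would
have to cope with two discontiguous buffers):

#include <stddef.h>
#include <stdio.h>

/*
 * Rough sketch of the split idea: one piece capped at a (hypothetical)
 * 128 KB bucket, the remainder small enough for the existing smaller
 * buckets.
 */
static void
split_request(size_t req, size_t *big, size_t *rest)
{
        const size_t big_bucket = 128 * 1024;

        *big = (req > big_bucket) ? big_bucket : req;
        *rest = req - *big;
}

int
main(void)
{
        size_t big, rest;

        split_request(130 * 1024, &big, &rest);
        printf("130K request -> %zu + %zu bytes\n", big, rest);
        return (0);
}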