svn commit: r367165 - head/sys/fs/tmpfs

Mateusz Guzik mjguzik at gmail.com
Fri Oct 30 16:19:19 UTC 2020


On 10/30/20, Konstantin Belousov <kostikbel at gmail.com> wrote:
> On Fri, Oct 30, 2020 at 04:42:39PM +0100, Mateusz Guzik wrote:
>> On 10/30/20, Konstantin Belousov <kostikbel at gmail.com> wrote:
>> > On Fri, Oct 30, 2020 at 03:08:32PM +0100, Mateusz Guzik wrote:
>> >> On 10/30/20, Mateusz Guzik <mjg at freebsd.org> wrote:
>> >> > Author: mjg
>> >> > Date: Fri Oct 30 14:07:25 2020
>> >> > New Revision: 367165
>> >> > URL: https://svnweb.freebsd.org/changeset/base/367165
>> >> >
>> >> > Log:
>> >> >   tmpfs: change tmpfs dirent zone into a malloc type
>> >> >
>> >> >   It is 64 bytes.
>> >> >
>> >>
>> >> Right now malloc has only power-of-2 zones, but I'm looking into
>> >> changing that. The allocator itself trivially extends to multiples
>> >> of 16, but stat collection needs reworking.
>> >
>> > Neither the commit message nor the follow-up explains why it is
>> > useful to stop using a zone for dirents.  Intuition says exactly the
>> > reverse: dirents on tmpfs are typically allocation-intensive.
>> >
>>
>> Offhand, the only reasons to use a dedicated zone that I see are:
>> - use of any of the routines on object creation/destruction
>> - non-standard flags like NOFREE
>> - SMR
>> - a high expected allocation count with sizes that fit malloc poorly
> - Visibility of allocation rate and memory consumption

This is still tracked and reported by vmstat -m.

> - Detection of leak on zone destruction (tmpfs unmount)

Zones stopped being per-mount, so now it would be on tmpfs unload.
Even then, this once more is provided thanks to malloc types -- there
is per-CPU tracking of all allocations within a given type, and it is
validated to be 0 on type destruction. IOW, I don't see a loss of
functionality.

>>
>> Since malloc itself is implemented on top of zones, the difference
>> before/after the patch is that it can now re-use the pre-existing
>> 64-byte buckets instead of creating its own copy.
>>
>> The above follow-up was to address potential comments about the size
>> changing from 64 -- with better malloc granularity this won't be a
>> big deal. Also note that tmpfs already uses malloc to store names.
> Is it 64 on all arches, or only on LP64?  I think the latter, and then
> this additionally hits 32-bit arches.
>

I did not check 32-bit. With more granular malloc it is going to be
once more an exact fit or close to it. Either way, with more granular
malloc and more zones moved there, the total memory use should go down
thanks to avoiding spurious per-cpu buckets.

>>
>> If anything in my opinion the kernel has unnecessary zones (like the
>> vnode poll one I patched prior to this).
> For this one I agree, it is a low-profile alloc type.
>


-- 
Mateusz Guzik <mjguzik gmail.com>


More information about the svn-src-all mailing list