vm.kmem_size_max and vm.kmem_size capped at 329853485875 (~307GB)

Gezeala M. Bacuño II gezeala at gmail.com
Mon Aug 27 22:23:27 UTC 2012


On Thu, Aug 23, 2012 at 12:02 PM, Alan Cox <alc at rice.edu> wrote:
> On 08/22/2012 12:09, Gezeala M. Bacuño II wrote:
>>
>> On Tue, Aug 21, 2012 at 4:24 PM, Alan Cox<alc at rice.edu>  wrote:
>>>
>>> On 8/20/2012 8:26 PM, Gezeala M. Bacuño II wrote:
>>>>
>>>> On Mon, Aug 20, 2012 at 9:07 AM, Gezeala M. Bacuño II<gezeala at gmail.com>
>>>> wrote:
>>>>>
>>>>> On Mon, Aug 20, 2012 at 8:22 AM, Alan Cox<alc at rice.edu>  wrote:
>>>>>>
>>>>>> On 08/18/2012 19:57, Gezeala M. Bacuño II wrote:
>>>>>>>
>>>>>>> On Sat, Aug 18, 2012 at 12:14 PM, Alan Cox<alc at rice.edu>   wrote:
>>>>>>>>
>>>>>>>> On 08/17/2012 17:08, Gezeala M. Bacuño II wrote:
>>>>>>>>>
>>>>>>>>> On Fri, Aug 17, 2012 at 1:58 PM, Alan Cox<alc at rice.edu>    wrote:
>>>>>>>>>>
>>>>>>>>>> vm.kmem_size controls the maximum size of the kernel's heap,
>>>>>>>>>> i.e., the region where the kernel's slab and malloc()-like
>>>>>>>>>> memory allocators obtain their memory.  While this heap may
>>>>>>>>>> occupy the largest portion of the kernel's virtual address
>>>>>>>>>> space, it cannot occupy the entirety of the address space.
>>>>>>>>>> There are other things that must be given space within the
>>>>>>>>>> kernel's address space, for example, the file system buffer
>>>>>>>>>> map.
>>>>>>>>>>
>>>>>>>>>> ZFS does not, however, use the regular file system buffer
>>>>>>>>>> cache.  The ARC takes its place, and the ARC abuses the
>>>>>>>>>> kernel's heap like nothing else.  So, if you are running a
>>>>>>>>>> machine that only makes trivial use of a non-ZFS file system,
>>>>>>>>>> like you boot from UFS, but store all of your data in ZFS,
>>>>>>>>>> then you can dramatically reduce the size of the buffer map
>>>>>>>>>> via boot loader tuneables and proportionately increase
>>>>>>>>>> vm.kmem_size.
>>>>>>>>>>
>>>>>>>>>> Any further increases in the kernel virtual address space
>>>>>>>>>> size will, however, require code changes.  Small changes, but
>>>>>>>>>> changes nonetheless.
>>>>>>>>>>
>>>>>>>>>> Alan
>>>>>>>>>>
>>>>> <<snip>>
>>>>>>>>
>>>>>>>> Your objective should be to reduce the value of "sysctl
>>>>>>>> vfs.maxbufspace".  You can do this by setting the loader.conf
>>>>>>>> tuneable "kern.maxbcache" to the desired value.
>>>>>>>>
>>>>>>>> What does your machine currently report for "sysctl
>>>>>>>> vfs.maxbufspace"?
>>>>>>>>
>>>>>>> Here you go:
>>>>>>> vfs.maxbufspace: 54967025664
>>>>>>> kern.maxbcache: 0
>>>>>>
>>>>>>
>>>>>> Try setting kern.maxbcache to two billion and adding 50 billion to the
>>>>>> setting of vm.kmem_size{,_max}.
>>>>>>
>>>> 2 : 50 ==>>  is this the ratio for further tuning
>>>> kern.maxbcache:vm.kmem_size? Is kern.maxbcache also in bytes?
>>>>
>>> No, this is not a ratio.  Yes, kern.maxbcache is in bytes. Basically, for
>>> every byte that you subtract from vfs.maxbufspace, through setting
>>> kern.maxbcache, you can add a byte to vm.kmem_size{,_max}.
>>>
>>> Alan
>>>
>> Great! Thanks. Are there other sysctls aside from vfs.bufspace that I
>> should monitor for vfs.maxbufspace usage? I just want to make sure
>> that vfs.maxbufspace is sufficient for our needs.
>
>
> You might keep an eye on "sysctl vfs.bufdefragcnt".  If it starts rapidly
> increasing, you may want to increase vfs.maxbufspace.
>
> Alan
>
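For concreteness, the byte-for-byte trade described above can be worked
out from the numbers quoted earlier (a sketch only; the suggested "add
50 billion" is slightly more conservative than the full difference):

```shell
#!/bin/sh
# Sketch of the byte-for-byte trade between the buffer map and the
# kernel heap, using the values quoted above. Every byte removed from
# vfs.maxbufspace (via kern.maxbcache) can be added to vm.kmem_size{,_max}.
old_maxbufspace=54967025664   # auto-tuned value reported by "sysctl vfs.maxbufspace"
new_maxbcache=2000000000      # suggested kern.maxbcache setting (bytes)
freed=$((old_maxbufspace - new_maxbcache))
echo "bytes reclaimable for vm.kmem_size: $freed"
```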

We seem to max out vfs.bufspace within 24 hours of uptime. It has been
steady at 1999273984 while vfs.bufdefragcnt stays at 0, which I presume
is good. Nevertheless, I will increase kern.maxbcache to 6GB and adjust
vm.kmem_size{,_max} and vfs.zfs.arc_max accordingly. On another machine,
where vfs.maxbufspace was auto-tuned to 7738671104 (~7.2GB), vfs.bufspace
is now at 5278597120 (uptime 129 days).
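That plan would look something like the following /boot/loader.conf
fragment. Only the 6GB kern.maxbcache figure comes from this thread;
the kmem and ARC values below are made-up placeholders that must be
sized for the machine's RAM:

```
# Hypothetical /boot/loader.conf fragment sketching the plan above.
kern.maxbcache="6442450944"    # 6 GB buffer cache cap, in bytes
vm.kmem_size="384G"            # placeholder; size to fit this machine
vm.kmem_size_max="384G"        # placeholder; keep equal to vm.kmem_size
vfs.zfs.arc_max="320G"         # placeholder; must stay below vm.kmem_size
```

These are boot-time tunables, so the new values take effect only after
a reboot.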

vfs.maxbufspace: 1999994880
kern.maxbcache: 2000000000
vfs.hirunningspace: 16777216
vfs.lorunningspace: 11206656
vfs.bufdefragcnt: 0
vfs.buffreekvacnt: 59
vfs.bufreusecnt: 61075
vfs.hibufspace: 1999339520
vfs.lobufspace: 1999273984
vfs.maxmallocbufspace: 99966976
vfs.bufmallocspace: 0
vfs.bufspace: 1999273984
vfs.runningbufspace: 0
vfs.numdirtybuffers: 2
vfs.lodirtybuffers: 15268
vfs.hidirtybuffers: 30537
vfs.dirtybufthresh: 27483
vfs.numfreebuffers: 122068
vfs.getnewbufcalls: 1159148


More information about the freebsd-performance mailing list