Observations on virtual memory operations

Pete Wright pete at nomadlogic.org
Tue Dec 29 18:09:54 UTC 2020



On 12/29/20 9:40 AM, doug at safeport.com wrote:
> On Tue, 29 Dec 2020, Michael Schuster wrote:
>
>> On Tue, Dec 29, 2020, 00:37 Pete Wright <pete at nomadlogic.org> wrote:
>>
>>>
>>>
>>> On 12/28/20 3:25 PM, doug wrote:
>>>> I have two servers running jails that "routinely" run out of
>>>> swapspace with no demand paging activity. To try and get a handle
>>>> on VM/swapspace management I have been tracking swapinfo vs memory
>>>> use as measured by top. The numbers do not exactly add up but I
>>>> assume that is not involved in my issue.
>>>>
>>> <snip>
>>>>
>>>> The other day I caught the system at 73% swapspace used. At this
>>>> level the system was in a near-thrashing state: typing a key got
>>>> it echoed in 10 to 30 seconds. There was about 600MB of swapspace
>>>> left at this point. I would think there is no way to debug this
>>>> except as a thought experiment.
>>>
>>> The first thing that comes to mind is: do you have the ability to
>>> hook any metrics/monitoring onto this system?  For example, I use
>>> collectd on my systems to report overall CPU/memory metrics as well
>>> as per-process memory metrics.
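>>>
>>> As a rough sketch, the collectd side can be as little as enabling
>>> the processes plugin in collectd.conf and naming the daemons you
>>> care about (the "httpd" name here is just an example, not specific
>>> to your setup):
>>>
>>> LoadPlugin processes
>>> <Plugin processes>
>>>   Process "httpd"
>>> </Plugin>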
>>>
>>> Alternatively you could write a simple shell script that runs "ps"
>>> and parses its output for memory utilization on a per-process basis.
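>>>
>>> Something along these lines (an untested sketch) lists the top
>>> memory consumers by resident set size; run it from cron and log the
>>> output to watch for growth over time:
>>>
>>> $ ps -axo pid,rss,vsz,comm | sort -k2 -rn | head -10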
>>>
>>> Either of the above approaches should give you some insight into
>>> where the memory leak is coming from (assuming you do not already
>>> know).
>>>
>>> One trick I use is to invoke a process with "limits" to ensure it
>>> does not exceed a certain amount of memory that I allocate to it.
>>> For example, with Firefox I do this:
>>> $ limits -m 6g -v 6g /usr/local/bin/firefox
>>>
>>> That should at least buy you enough time to investigate why the
>>> process needs so much memory and see what you can do about it.
>>
> Thank you all for the information and thoughts. If vmstat produces
> correct information there is no demand paging. The limiting condition
> on these systems is swapfile space rather than real memory. There are
> 69 sysctl elements dealing with paging and the swapfile. If there is
> documentation on these (other than reading the C source), that would
> perhaps be helpful. Most are totals; demand paging rates may be in
> this set, but not as far as I can tell.
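>
> For what it is worth, the counters I have been reading, which show no
> swap paging activity here (assuming these are even the right ones to
> watch), are along these lines:
>
> $ sysctl vm.stats.vm.v_swappgsin vm.stats.vm.v_swappgsout
> $ vmstat -s | grep 'swap pager'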
>
> The one time I caught the system dying the limiting resource was
> swapspace. There was no paging (per the last vmstat) and about 670MB
> left in the swapfile. In this state I could recover by restarting
> apache.

I wouldn't go down that rabbit hole just yet.  If the issue is 
apache-httpd causing your memory to run away, I would instead focus on 
trying to determine *why* httpd is doing that.  A well-behaved process 
should generally not need to page out to disk if the system is 
appropriately sized and configured.  As such, I would suggest starting 
at the application layer before trying to tweak how FreeBSD manages 
paging out to disk.
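
To get a quick read on httpd growth specifically, something like this 
rough sketch (adjust the pattern to match your actual process name) 
sums the resident set size across all httpd workers; logging it every 
few minutes should show whether it grows without bound:

$ ps -axo rss,comm | awk '/httpd/ { sum += $1 } END { print sum " KB resident" }'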

For example, I remember issues back in the day where httpd would 
consume enormous amounts of memory when people were uploading files.  
We were able to address this by being more aggressive about writing 
the uploaded data to disk in chunks during the upload process.

-p

-- 
Pete Wright
pete at nomadlogic.org
@nomadlogicLA


