BSD 8.1 and 9.1 memory increase

Gumpula, Suresh Suresh.Gumpula at netapp.com
Wed Apr 1 16:06:08 UTC 2015


Thanks a lot, Conrad and Cooper.
I am indeed seeing a larger physical memory footprint, not just more vmem.
Looking at the top stats on an idle machine, there is an increase in
active/inactive pages.  I then looked at one of the large applications we
run (mgwd): it shows an increase of ~15M in vmem and ~35M in RSS.  Looking
more closely at its mapping entries with procstat -v, I see that both the
resident (18209 to 23328) and private resident (22827 to 26760) page counts
have gone up for one of the libraries.  The boost atomic library similarly
shows an increase in resident pages, and the same pattern is observed for
most of our applications.
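
For scale, converting the procstat deltas for libmgwd.so.1 into bytes (a
rough back-of-the-envelope figure, assuming the standard 4 KiB base page
size):

   RES:  23328 - 18209 = 5119 pages * 4 KiB ~= 20 MiB
   PRES: 26760 - 22827 = 3933 pages * 4 KiB ~= 15 MiB

so this single library already appears to account for a good part of the
~35M RSS growth.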

FreeBSD 8.1:

last pid:  5116;  load averages:  5.22,  4.29,  2.34    up 0+00:10:15  17:34:00
352 processes: 1 running, 350 sleeping, 1 zombie
CPU:  0.2% user,  0.0% nice,  4.5% system,  1.7% interrupt, 93.5% idle
Mem: 297M Active, 648M Inact, 139M Wired, 6948K Cache, 7520K Buf, 1862M Free
Swap: 1536M Total, 1536M Free


  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
 5112 diag        1  96    0 19012K  5668K CPU0    0   0:00  2.00% top
 2219 root       68  96    0   859M   226M ucond   1   0:24  0.00% mgwd

% sudo procstat -v `pgrep mgwd`
  PID              START                END PRT   RES  PRES REF SHD FL TP PATH
 2213        0x800a46000        0x807b41000 r-x 18209 22827   2   1 CN vn /usr/lib/libmgwd.so.1




FreeBSD 9.1:

last pid:  5344;  load averages:  5.17,  4.47,  2.79    up 0+00:26:12  17:22:57
39 processes:  1 running, 37 sleeping, 1 zombie
CPU:  0.2% user,  0.0% nice,  2.2% system,  0.6% interrupt, 97.0% idle
Mem: 338M Active, 669M Inact, 147M Wired, 392K Cache, 7488K Buf, 1799M Free
Swap: 1536M Total, 1536M Free


  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
 5328 diag        1  40    0 17028K  5204K CPU1    0   0:00  0.40% top
 2158 root       68  40    0   874M   262M uwait   1   0:23  0.00% mgwd

% sudo procstat -v `pgrep mgwd`
  PID              START                END PRT   RES  PRES REF SHD FL TP PATH
 2158        0x800a1c000        0x807c87000 r-x 23328 26760   2   1 CN vn /usr/lib/libmgwd.so.1
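
Since superpages were brought up earlier in this thread (quoted below), one
more data point worth collecting on both machines would be the pmap
superpage counters -- just a sketch, assuming the vm.pmap.pde sysctls are
present on these amd64 kernels (exact names may vary by release):

% sysctl vm.pmap.pde
vm.pmap.pde.demotions: ...
vm.pmap.pde.mappings: ...
vm.pmap.pde.p_failures: ...
vm.pmap.pde.promotions: ...

A big difference in promotions/mappings between 8.1 and 9.1 would point at
the rtld/superpage changes rather than at the applications themselves.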





Thanks
Suresh




On 4/1/15, 8:34 AM, "Meyer, Conrad" <conrad.meyer at isilon.com> wrote:

>> On Mar 31, 2015, at 17:54, Gumpula, Suresh <Suresh.Gumpula at netapp.com>
>> wrote:
>>
>> Still trying to find out the reason for the larger memory footprint on
>> 9.1 compared to 8.1.
>> Does something like the clustering changes in page fault handling
>> increase the memory footprint?
>> https://svnweb.freebsd.org/base?view=revision&revision=235876
>>
>> Copying Alan Cox, in case he can offer some input on this.
>
>Superpages, and how FreeBSD does its best to put runtime libraries into
>superpage-able mappings, come to mind..
>
>The vmem for libraries is what caught us off guard last year when dealing
>with applications -- more libraries == greater footprint after either 8.0
>or 9.0, because of changes to the VM and rtld.
>
>Conrad Meyer had a change out to reduce the footprint for libraries, but
>it was racy/incomplete unfortunately :/..
>
>Hope that maybe helps...
>
>-----------------------------------
>
>Right. So the linker and RTLD map each binary segment with 2MB virtual
>pages, because that way you only need one mapping / TLB entry per segment
>(or at least, up to 2MB... most libraries are much smaller than this).
>This is a performance optimization. The discussion around unmapping
>unused portions of the 2MB range can be found here:
>https://reviews.freebsd.org/D1263 .
>
>To summarize: larger-than-necessary superpage mappings affect only vmem
>accounting; they actually use fewer resources (PTEs and any additional
>per-PTE VM accounting) than 4k pages; and they use fewer TLB entries.
>Unmapping the unused portions is useless even if you get it right.
>
>Are you actually seeing a greater memory footprint, or just a greater vmem
>footprint? I don't actually use FreeBSD 8 or 9.
>
>Cheers,
>Conrad


