FreeBSD 8.2 - active plus inactive memory leak!?
Konstantin Belousov
kostikbel at gmail.com
Wed Mar 7 09:31:26 UTC 2012
On Wed, Mar 07, 2012 at 09:26:06AM +0000, Luke Marsden wrote:
> On Wed, 2012-03-07 at 10:23 +0200, Konstantin Belousov wrote:
> > On Wed, Mar 07, 2012 at 12:36:21AM +0000, Luke Marsden wrote:
> > > I'm trying to confirm that, on a system with no pages swapped out,
> > > the following is a true statement:
> > >
> > > a page is accounted for in active + inactive if and only if it
> > > corresponds to one or more of the pages accounted for in the
> > > resident memory lists of all the processes on the system (as per
> > > the output of 'top' and 'ps')
> > No.
> >
> > The pages belonging to vnode vm object can be active or inactive or cached
> > but not mapped into any process address space.
>
> Thank you, Konstantin. Does the number of vnodes we've got open on this
> machine (272011) fully explain away the memory gap?
>
> Memory gap:
> 11264M active + 2598M inactive - 9297M sum-of-resident = 4565M
>
> Active vnodes:
> vfs.numvnodes: 272011
>
> That gives a lower bound of 17.18KB per vnode (or higher if we take into
> account shared libs, etc.); that seems a bit high for a vnode vm object,
> doesn't it?
A vnode vm object keeps the set of pages belonging to the vnode. There is
nothing bad (or good) about that.
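[The per-vnode arithmetic above can be sketched as follows; the numbers are the ones quoted in this thread, and the "gap" is simply active + inactive minus the sum of per-process resident sizes.]

```python
# Reproduce the memory-gap arithmetic from the thread (values in KB).
MB = 1024  # 1 MB expressed in KB

active_kb   = 11264 * MB
inactive_kb =  2598 * MB
resident_kb =  9297 * MB   # sum of per-process resident sizes from top/ps
numvnodes   = 272011       # vfs.numvnodes

gap_kb = active_kb + inactive_kb - resident_kb
per_vnode_kb = gap_kb / numvnodes

print(f"gap = {gap_kb / MB:.0f}M, ~{per_vnode_kb:.2f}KB per vnode")
```

This only confirms the arithmetic, not the attribution: the gap is an upper bound on what vnode vm objects could hold, since shared library pages counted once here are resident in many processes.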
>
> If that doesn't fully explain it, what else might be chewing through
> active memory?
>
> Also, when are vnodes freed?
>
> This system does have some tuning...
> kern.maxfiles: 1000000
> vm.pmap.pv_entry_max: 73296250
>
> Could that be contributing to so much active + inactive memory (5GB+
> more than expected), or do PV entries live in wired e.g. kernel memory?
pv entries are accounted as wired memory.
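[Since pv entries land in wired memory, they cannot explain an active + inactive gap, but a rough worst-case sketch shows the scale of the tuning above. The ~24-byte entry size is an assumption for amd64; the real pmap allocator packs entries into page-sized chunks, and vm.pmap.pv_entry_max is a cap, not current usage.]

```python
# Rough worst-case bound on memory pv entries could pin as wired,
# assuming (hypothetically) ~24 bytes per pv entry on amd64. The actual
# in-use count would come from vm.pmap.pv_entry_count, not the max.
PV_ENTRY_MAX  = 73296250          # vm.pmap.pv_entry_max from the thread
PV_ENTRY_SIZE = 24                # bytes per entry, assumed

max_pv_mb = PV_ENTRY_MAX * PV_ENTRY_SIZE / (1024 * 1024)
print(f"worst case ~{max_pv_mb:.0f}MB, accounted as wired memory")
```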
>
>
> On Tue, 2012-03-06 at 17:48 -0700, Ian Lepore wrote:
> > In my experience, the bulk of the memory in the inactive category is
> > cached disk blocks, at least for ufs (I think zfs does things
> > differently). On this desktop machine I have 12G physical and
> > typically have roughly 11G inactive, and I can unmount one particular
> > filesystem where most of my work is done and instantly I have almost
> > no inactive and roughly 11G free.
>
> Okay, so this could be UFS disk cache, except the system is ZFS-on-root
> with no UFS filesystems active or mounted. Can I confirm that no
> double-caching of ZFS data is happening in active + inactive (+ cache)
> memory?
ZFS double-buffers mmapped files.
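[One way to see how this interacts with the page-queue accounting discussed above: the ARC lives in wired memory, so a ZFS file that is also mmapped shows up twice, once in active/inactive (the page-cache copy) and once inside wired (the ARC copy). A small sketch of the identity the vm.stats.vm.v_*_count counters should satisfy; the snapshot values below are hypothetical, not output from the machine in this thread, and on a live box the counters are sampled at slightly different times so the sum is only approximate.]

```python
# Sanity check: on FreeBSD the per-queue page counters should sum to
# roughly the physical page count (vm.stats.vm.v_page_count).
def accounted_pages(stats):
    """Sum the per-queue page counters from a sysctl snapshot (dict)."""
    return sum(stats[k] for k in
               ("v_active_count", "v_inactive_count",
                "v_wire_count", "v_cache_count", "v_free_count"))

# Hypothetical snapshot, 4KB pages. The active/inactive values mirror
# the thread's 11264M/2598M figures; the rest are made up to balance.
snapshot = {
    "v_page_count":     6000000,
    "v_active_count":   2883584,   # 11264M / 4KB
    "v_inactive_count":  665088,   # 2598M / 4KB
    "v_wire_count":     2000000,   # includes the ZFS ARC
    "v_cache_count":     100000,
    "v_free_count":      351328,
}
assert accounted_pages(snapshot) == snapshot["v_page_count"]
```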
More information about the freebsd-fs mailing list