limit on PV entries

Conrad J. Sabatier conrads at cox.net
Thu Feb 2 07:38:47 UTC 2012


On Wed, 1 Feb 2012 10:17:21 +0100
n dhert <ndhertbsd at gmail.com> wrote:

> FreeBSD 8.2-RELEASE
> From time to time, I get in /var/log/messages
>  kernel: Approaching the limit on PV entries, consider increasing
> either the vm.pmap.shpgperproc or the vm.pmap.pv_entry_max sysctl.
> 
> this started a few weeks ago, never had that before, don't have it on
> other FreeBSD 8.2-RELEASE systems.
> 
> - What does this mean?
> - And how to increase either of the two and to what level ?
> 
> $ sysctl vm.pmap.shpgperproc
> vm.pmap.shpgperproc: 200
> $ sysctl vm.pmap.pv_entry_max
> vm.pmap.pv_entry_max: 3256966

If you want to know the meaning of a particular sysctl, then 'sysctl
-d' is your friend:

# sysctl -d vm.pmap.shpgperproc
vm.pmap.shpgperproc: Page share factor per proc
# sysctl -d vm.pmap.pv_entry_max
vm.pmap.pv_entry_max: Max number of PV entries

(OK, granted, you may wish to know more than is provided by just these
terse single-line entries, but it's a start along the clue path, at
least)
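
A handy shortcut here, by the way: assuming your version of sysctl(8)
lets you combine the two flags, something along the lines of

# sysctl -ad | grep '^vm\.pmap\.'

should dump the one-line descriptions for the entire vm.pmap family in
one go.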

Your mileage may vary, but for me personally, I like to have *some*
sense that I'm approaching this sort of problem in an organized,
methodical manner, rather than simply pulling numbers out of a hat, so
to speak.  So, since computers are based on the binary number system, I
naturally gravitate towards numbers that are nice, neat multiples of
some power of 2 (64, 128, 256, 512, 1024, 2048, and so on).

There's really no hard-and-fast rule for determining a good setting for
many of these numeric-type sysctls.  The method I generally use is:

First, try to find a similarly named sysctl within that same hierarchy
that represents the current working value for the sysctl in question:
lop off the last element of the sysctl name, query the parent node, and
scan the results, using 'sysctl -d' again on the most likely
candidate(s) to be sure.
For instance, since we're dealing here with two members of the vm.pmap
family:

# sysctl vm.pmap

vm.pmap.pat_works: 1
vm.pmap.pg_ps_enabled: 1
vm.pmap.pv_entry_max: 7573622
vm.pmap.shpgperproc: 1024
vm.pmap.pde.demotions: 14959
vm.pmap.pde.mappings: 1055
vm.pmap.pde.p_failures: 255640
vm.pmap.pde.promotions: 23441
vm.pmap.pdpe.demotions: 5
vm.pmap.pv_entry_count: 270500 <--- this looks like a likely candidate
vm.pmap.pc_chunk_count: 2590
vm.pmap.pc_chunk_allocs: 1551162
vm.pmap.pc_chunk_frees: 1548572
vm.pmap.pc_chunk_tryfail: 0
vm.pmap.pv_entry_frees: 244822815
vm.pmap.pv_entry_allocs: 245093315
vm.pmap.pv_entry_spare: 164620
vm.pmap.pmap_collect_inactive: 0
vm.pmap.pmap_collect_active: 0

# sysctl -d vm.pmap.pv_entry_count
vm.pmap.pv_entry_count: Current number of pv entries
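
To put the current count and the configured ceiling side by side
(sysctl is happy to take more than one variable name at a time):

# sysctl vm.pmap.pv_entry_count vm.pmap.pv_entry_max
vm.pmap.pv_entry_count: 270500
vm.pmap.pv_entry_max: 7573622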

Comparing the current working value to the defined maximum value here,
it's pretty obvious that the problem doesn't lie with
vm.pmap.pv_entry_max, so it's more likely that (were I experiencing a
problem) the value of vm.pmap.shpgperproc needs to be increased.

How I choose the value also depends on what the number represents.  If
it's determining the size of a buffer or some other chunk of memory, I
bump the value up to the next (or second-next) multiple of some
reasonable power of 2 (64, 128, 256, 512, 1024), since computers
"like" such nice, "neat" numbers.

In the case of items that set a limit on the number of entries in some
sort of array, such as vm.pmap.pv_entry_max, I usually just double the
default setting and see how that works, perhaps backing it off a bit
later by some "sensible" factor.
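
In the original poster's case, for instance, doubling
vm.pmap.pv_entry_max from 3256966 would mean a setting of roughly
6513932, which is in the same neighborhood as the 7573622 shown on my
own system above.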

If the problem persists, just bump it up again by a similar factor
until it goes away.
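
As for actually applying the change: if memory serves, both of these
are boot-time tunables rather than something you can twiddle on a live
system, so the usual place to set them is /boot/loader.conf, followed
by a reboot.  Something like the following (the value here is purely
illustrative: the original poster's 200 doubled):

vm.pmap.shpgperproc="400"

or, if you'd rather adjust the ceiling directly, set
vm.pmap.pv_entry_max instead, chosen along the lines described above.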

You may, in time, devise your own methodology, but the one I just
described is "comfortable" for me, and does feel a lot better than just
haphazardly plugging in random numbers and hoping for the best.

HTH

-- 
Conrad J. Sabatier
conrads at cox.net

