svn commit: r351673 - in head: lib/libmemstat share/man/man9 sys/cddl/compat/opensolaris/kern sys/kern sys/vm
Slawa Olhovchenkov
slw at zxy.spb.ru
Tue Sep 3 16:14:37 UTC 2019
On Tue, Sep 03, 2019 at 10:02:59AM +0300, Andriy Gapon wrote:
> On 02/09/2019 01:22, Mark Johnston wrote:
> > Author: markj
> > Date: Sun Sep 1 22:22:43 2019
> > New Revision: 351673
> > URL: https://svnweb.freebsd.org/changeset/base/351673
> >
> > Log:
> > Extend uma_reclaim() to permit different reclamation targets.
> >
> > The page daemon periodically invokes uma_reclaim() to reclaim cached
> > items from each zone when the system is under memory pressure. This
> > is important since the size of these caches is unbounded by default.
> > However it also results in bursts of high latency when allocating from
> > heavily used zones as threads miss in the per-CPU caches and must
> > access the keg in order to allocate new items.
> >
> > With r340405 we maintain an estimate of each zone's usage of its
> > (per-NUMA domain) cache of full buckets. Start making use of this
> > estimate to avoid reclaiming the entire cache when under memory
> > pressure. In particular, introduce TRIM, DRAIN and DRAIN_CPU
> > verbs for uma_reclaim() and uma_zone_reclaim(). When trimming, only
> > items in excess of the estimate are reclaimed. Draining a zone
> > reclaims all of the cached full buckets (the previous behaviour of
> > uma_reclaim()), and may further drain the per-CPU caches in extreme
> > cases.
> >
> > Now, when under memory pressure, the page daemon will trim zones
> > rather than draining them. As a result, heavily used zones do not incur
> > bursts of bucket cache misses following reclamation, but large, unused
> > caches will be reclaimed as before.
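[Editor's note: the trim-versus-drain distinction described in the log above can be sketched in userspace C. All names here are illustrative stand-ins, not the kernel's actual UMA structures: trimming frees only the cached buckets in excess of the working-set estimate, while draining frees everything cached.]

```c
#include <stddef.h>

/* Hypothetical stand-ins for the uma_reclaim() reclamation verbs. */
enum reclaim_verb { RECLAIM_TRIM, RECLAIM_DRAIN, RECLAIM_DRAIN_CPU };

/*
 * Given the number of cached full buckets in a zone and that zone's
 * working-set estimate, return how many buckets to release.  TRIM frees
 * only the excess over the estimate; DRAIN and DRAIN_CPU free the whole
 * bucket cache (DRAIN_CPU additionally flushing per-CPU caches, which
 * this sketch does not model).
 */
static size_t
buckets_to_free(enum reclaim_verb verb, size_t cached, size_t estimate)
{
	if (verb == RECLAIM_TRIM)
		return (cached > estimate ? cached - estimate : 0);
	return (cached);	/* DRAIN and DRAIN_CPU */
}
```

Under this model, a heavily used zone whose cache sits near its estimate loses little or nothing to a TRIM pass, while an idle zone with a large stale cache is still reclaimed.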
>
> Mark,
>
> have you considered running UMA_RECLAIM_TRIM periodically, even without memory
> pressure?
> I think that with such a periodic trimming there will be less need to invoke
> vm_lowmem().
>
> Also, I think that we would be able to retire (or re-purpose) lowmem_period.
> E.g., the trimming would be done every lowmem_period, but vm_lowmem() would not
> be throttled.
>
> One example of the throttling of vm_lowmem being bad is its interaction with the
> ZFS ARC. When there is a spike in memory usage we want the ARC to adapt as
> quickly as possible. But at present the lowmem_period logic interferes with that.
Some time ago I sent Mark a patch that implements this logic,
specifically to make the ARC and the mbuf zones cooperate. The main
problem I saw in that work was that vm_page_free() is very slow.
Perhaps it is faster now...
More information about the svn-src-all mailing list