git: 03984bdfa033 - stable/13 - vm: Round up npages and alignment for contig reclamation
Mark Johnston
markj at FreeBSD.org
Tue Mar 16 15:14:28 UTC 2021
The branch stable/13 has been updated by markj:
URL: https://cgit.FreeBSD.org/src/commit/?id=03984bdfa033efe0597aa6adac85dba6e7ddb9a8
commit 03984bdfa033efe0597aa6adac85dba6e7ddb9a8
Author: Mark Johnston <markj at FreeBSD.org>
AuthorDate: 2021-03-02 15:19:53 +0000
Commit: Mark Johnston <markj at FreeBSD.org>
CommitDate: 2021-03-16 15:14:09 +0000
vm: Round up npages and alignment for contig reclamation
When searching for runs to reclaim, we need to ensure that the entire
run will be added to the buddy allocator as a single unit. Otherwise,
it will not be visible to vm_phys_alloc_contig() as it is currently
implemented. This is a problem for allocation requests that are not a
power of 2 in size, as with 9KB jumbo mbuf clusters.
Reported by: alc
Reviewed by: alc
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D28924
(cherry picked from commit 0401989282d1bb9972ae2bf4862c2c6c92ae5f27)
---
sys/vm/vm_page.c | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/sys/vm/vm_page.c b/sys/vm/vm_page.c
index c36b8cdc5762..da62e6795c81 100644
--- a/sys/vm/vm_page.c
+++ b/sys/vm/vm_page.c
@@ -2972,17 +2972,29 @@ vm_page_reclaim_contig_domain(int domain, int req, u_long npages,
 	struct vm_domain *vmd;
 	vm_paddr_t curr_low;
 	vm_page_t m_run, m_runs[NRUNS];
-	u_long count, reclaimed;
+	u_long count, minalign, reclaimed;
 	int error, i, options, req_class;
 
 	KASSERT(npages > 0, ("npages is 0"));
 	KASSERT(powerof2(alignment), ("alignment is not a power of 2"));
 	KASSERT(powerof2(boundary), ("boundary is not a power of 2"));
-	req_class = req & VM_ALLOC_CLASS_MASK;
+
+	/*
+	 * The caller will attempt an allocation after some runs have been
+	 * reclaimed and added to the vm_phys buddy lists.  Due to limitations
+	 * of vm_phys_alloc_contig(), round up the requested length to the next
+	 * power of two or maximum chunk size, and ensure that each run is
+	 * suitably aligned.
+	 */
+	minalign = 1ul << imin(flsl(npages - 1), VM_NFREEORDER - 1);
+	npages = roundup2(npages, minalign);
+	if (alignment < ptoa(minalign))
+		alignment = ptoa(minalign);
 
 	/*
 	 * The page daemon is allowed to dig deeper into the free page list.
 	 */
+	req_class = req & VM_ALLOC_CLASS_MASK;
 	if (curproc == pageproc && req_class != VM_ALLOC_INTERRUPT)
 		req_class = VM_ALLOC_SYSTEM;