git: 103763fab4f7 - stable/14 - vm_pageout_scan_inactive: take a lock break
Date: Thu, 10 Jul 2025 21:04:13 UTC
The branch stable/14 has been updated by markj:
URL: https://cgit.FreeBSD.org/src/commit/?id=103763fab4f7349df53f7367816f1f4ca2881005
commit 103763fab4f7349df53f7367816f1f4ca2881005
Author: Ryan Libby <rlibby@FreeBSD.org>
AuthorDate: 2024-05-24 15:52:58 +0000
Commit: Mark Johnston <markj@FreeBSD.org>
CommitDate: 2025-07-10 21:00:34 +0000
vm_pageout_scan_inactive: take a lock break
In vm_pageout_scan_inactive, release the object lock when we go to
refill the scan batch queue so that someone else has a chance to acquire
it. This improves access latency to the object when the pagedaemon is
processing many consecutive pages from a single object, and in any case
avoids stalling other accesses to the last touched object while the scan
batch queue is refilled.
Reviewed by: alc, markj (previous version)
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D45288
(cherry picked from commit a216e311a70cc87a5646f4306e36c60a51706699)
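
To make the shape of the change easier to see outside the kernel, here is a
minimal standalone sketch of the same lock-break pattern. This is not FreeBSD
code: a userland pthread mutex stands in for the VM object lock, and every
type and name in it (struct object, struct batchq, batchq_refill(), and so
on) is a hypothetical stand-in for illustration only.

/*
 * lockbreak.c -- minimal standalone sketch of the lock-break pattern,
 * not FreeBSD code.  A userland pthread mutex stands in for the VM
 * object lock; struct batchq and its helpers are hypothetical stand-ins
 * for the kernel's scan batch queue.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define	BATCH_SIZE	4
#define	NITEMS		10

struct object {
	pthread_mutex_t	lock;
	int		processed;
};

struct batchq {
	int	cnt;		/* items remaining in this batch */
	int	idx;		/* next item to consume */
	int	items[BATCH_SIZE];
};

static bool
batchq_empty(const struct batchq *bq)
{
	return (bq->cnt == 0);
}

/* Refill the batch from a notional backing queue of NITEMS items. */
static void
batchq_refill(struct batchq *bq, int *next)
{
	bq->cnt = bq->idx = 0;
	while (bq->cnt < BATCH_SIZE && *next < NITEMS)
		bq->items[bq->cnt++] = (*next)++;
}

int
main(void)
{
	struct object obj = { PTHREAD_MUTEX_INITIALIZER, 0 };
	struct object *locked = NULL;
	struct batchq bq = { 0, 0, { 0 } };
	int item, next = 0;

	for (;;) {
		/*
		 * About to refill: drop any lock still held from the
		 * previous iteration so that another thread has a chance
		 * to acquire it, and so that it is not held across the
		 * unrelated refill work.
		 */
		if (locked != NULL && batchq_empty(&bq)) {
			pthread_mutex_unlock(&locked->lock);
			locked = NULL;
		}
		if (batchq_empty(&bq)) {
			batchq_refill(&bq, &next);
			if (batchq_empty(&bq))
				break;	/* backing queue drained */
		}
		item = bq.items[bq.idx++];
		bq.cnt--;

		/* Reacquire the object lock lazily, on first use. */
		if (locked == NULL) {
			pthread_mutex_lock(&obj.lock);
			locked = &obj;
		}
		obj.processed++;	/* stand-in for real page work */
		(void)item;
	}
	if (locked != NULL)
		pthread_mutex_unlock(&locked->lock);
	printf("processed %d items\n", obj.processed);
	return (0);
}

The essential point matches the patch below: the lock is dropped precisely at
the refill boundary, when the holder is about to do work unrelated to the
locked object, and is reacquired lazily on the next item.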
---
 sys/vm/vm_pageout.c   | 16 +++++++++++++++-
 sys/vm/vm_pagequeue.h |  6 ++++++
 2 files changed, 21 insertions(+), 1 deletion(-)
diff --git a/sys/vm/vm_pageout.c b/sys/vm/vm_pageout.c
index c2c5281b87af..83f655eb852e 100644
--- a/sys/vm/vm_pageout.c
+++ b/sys/vm/vm_pageout.c
@@ -1468,7 +1468,21 @@ vm_pageout_scan_inactive(struct vm_domain *vmd, int page_shortage)
 	pq = &vmd->vmd_pagequeues[PQ_INACTIVE];
 	vm_pagequeue_lock(pq);
 	vm_pageout_init_scan(&ss, pq, marker, NULL, pq->pq_cnt);
-	while (page_shortage > 0 && (m = vm_pageout_next(&ss, true)) != NULL) {
+	while (page_shortage > 0) {
+		/*
+		 * If we need to refill the scan batch queue, release any
+		 * optimistically held object lock.  This gives someone else a
+		 * chance to grab the lock, and also avoids holding it while we
+		 * do unrelated work.
+		 */
+		if (object != NULL && vm_batchqueue_empty(&ss.bq)) {
+			VM_OBJECT_WUNLOCK(object);
+			object = NULL;
+		}
+
+		m = vm_pageout_next(&ss, true);
+		if (m == NULL)
+			break;
 		KASSERT((m->flags & PG_MARKER) == 0,
 		    ("marker page %p was dequeued", m));
 
diff --git a/sys/vm/vm_pagequeue.h b/sys/vm/vm_pagequeue.h
index 70122fef9fff..43cb67a252b5 100644
--- a/sys/vm/vm_pagequeue.h
+++ b/sys/vm/vm_pagequeue.h
@@ -357,6 +357,12 @@ vm_batchqueue_init(struct vm_batchqueue *bq)
 	bq->bq_cnt = 0;
 }
 
+static inline bool
+vm_batchqueue_empty(const struct vm_batchqueue *bq)
+{
+	return (bq->bq_cnt == 0);
+}
+
 static inline int
 vm_batchqueue_insert(struct vm_batchqueue *bq, vm_page_t m)
 {
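
For reference, the new vm_batchqueue_empty() follows the same one-line inline
accessor pattern as the existing vm_batchqueue_init() and
vm_batchqueue_insert() shown above, so callers such as the pageout loop can
test for an empty batch without reaching into bq_cnt directly. A standalone
sketch of that accessor pattern (hypothetical names and size, not the
kernel's definitions):

#include <stdbool.h>

#define	BQ_SIZE	8

struct bq {
	int	bq_cnt;		/* number of queued items */
	int	bq_item[BQ_SIZE];
};

static inline void
bq_init(struct bq *bq)
{
	bq->bq_cnt = 0;
}

/* Analogue of vm_batchqueue_empty(): a read-only predicate. */
static inline bool
bq_empty(const struct bq *bq)
{
	return (bq->bq_cnt == 0);
}

/* Analogue of vm_batchqueue_insert(): fails when the batch is full. */
static inline bool
bq_insert(struct bq *bq, int item)
{
	if (bq->bq_cnt == BQ_SIZE)
		return (false);
	bq->bq_item[bq->bq_cnt++] = item;
	return (true);
}

Note that the predicate takes a const pointer, so it can be called from code
that only holds a read view of the queue, which is all the refill check in
the scan loop needs.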