git: 088dd40169b1 - stable/14 - vm_phys_add_seg(): Check for bad segments, allow empty ones
Date: Tue, 08 Apr 2025 13:40:38 UTC
The branch stable/14 has been updated by olce:
URL: https://cgit.FreeBSD.org/src/commit/?id=088dd40169b1186bd09164daea10e26bdb833eee
commit 088dd40169b1186bd09164daea10e26bdb833eee
Author: Olivier Certner <olce@FreeBSD.org>
AuthorDate: 2024-10-09 17:04:34 +0000
Commit: Olivier Certner <olce@FreeBSD.org>
CommitDate: 2025-04-08 13:38:21 +0000
vm_phys_add_seg(): Check for bad segments, allow empty ones
A specification is bad if 'start' is strictly greater than 'end', or if the
bounds are not page aligned.
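[For illustration only, not part of the commit: a minimal userland sketch of
the validity rule above, assuming 4 KiB pages and using a hypothetical
segment_is_valid() helper in place of the in-kernel checks.]

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the kernel's PAGE_SIZE/PAGE_MASK; 4 KiB pages assumed. */
#define PAGE_SIZE 4096UL
#define PAGE_MASK (PAGE_SIZE - 1)

/* Mirrors the rule: both bounds page aligned and start <= end. */
static int
segment_is_valid(uint64_t start, uint64_t end)
{
	return ((start & PAGE_MASK) == 0 && (end & PAGE_MASK) == 0 &&
	    start <= end);
}

int
main(void)
{
	printf("%d\n", segment_is_valid(0x1000, 0x4000));	/* 1: valid */
	printf("%d\n", segment_is_valid(0x1000, 0x1000));	/* 1: empty, but allowed */
	printf("%d\n", segment_is_valid(0x4000, 0x1000));	/* 0: start > end */
	printf("%d\n", segment_is_valid(0x1234, 0x4000));	/* 0: start not aligned */
	return (0);
}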
The latter was already tested under INVARIANTS, but is now also checked
on production kernels. The reason is that vm_phys_early_startup() pours
early segments into the final phys_segs[] array via vm_phys_add_seg(),
but vm_phys_early_add_seg() did not check their validity. Checking
segments once and for all in vm_phys_add_seg() avoids duplicating
validity tests and is possible since early segments are not used before
being poured into phys_segs[]. Finally, vm_phys_add_seg() is not
performance critical.
Allow empty segments and discard them (silently, unless 'bootverbose' is
true): vm_page_startup() was already testing for this case before calling
vm_phys_add_seg(), and the same test would otherwise have been needed in
vm_phys_early_startup() before calling vm_phys_add_seg(). As a consequence,
remove the empty segment test from vm_page_startup().
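[For illustration only, not part of the commit: a minimal userland sketch of
the resulting caller pattern. add_seg_stub() is a hypothetical stand-in that
mimics the discard behaviour described above, and the phys_avail-style array
is invented.]

#include <stdint.h>
#include <stdio.h>

typedef uint64_t vm_paddr_t;

/* Userland stand-in mimicking the new empty-segment handling. */
static void
add_seg_stub(vm_paddr_t start, vm_paddr_t end)
{
	if (start == end) {
		printf("empty segment [%jx, %jx) discarded\n",
		    (uintmax_t)start, (uintmax_t)end);
		return;
	}
	printf("segment [%jx, %jx) added\n", (uintmax_t)start, (uintmax_t)end);
}

int
main(void)
{
	/* phys_avail-style array: [start, end) pairs, terminated by zeros. */
	vm_paddr_t phys_avail[] = { 0x1000, 0x9000, 0xa000, 0xa000, 0xc000,
	    0xf000, 0, 0 };

	/* Callers no longer need to filter out empty pairs themselves. */
	for (int i = 0; phys_avail[i + 1] != 0; i += 2)
		add_seg_stub(phys_avail[i], phys_avail[i + 1]);
	return (0);
}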
Reviewed by: markj
MFC after: 1 week
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D48627
(cherry picked from commit f30309abcce4cec891413da5cba2db92dd6ab0d7)
---
sys/vm/vm_page.c | 3 +--
sys/vm/vm_phys.c | 16 ++++++++++++----
2 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/sys/vm/vm_page.c b/sys/vm/vm_page.c
index 7eebf30e19a7..49628b94b12b 100644
--- a/sys/vm/vm_page.c
+++ b/sys/vm/vm_page.c
@@ -746,8 +746,7 @@ vm_page_startup(vm_offset_t vaddr)
* physical pages.
*/
for (i = 0; phys_avail[i + 1] != 0; i += 2)
- if (vm_phys_avail_size(i) != 0)
- vm_phys_add_seg(phys_avail[i], phys_avail[i + 1]);
+ vm_phys_add_seg(phys_avail[i], phys_avail[i + 1]);
/*
* Initialize the physical memory allocator.
diff --git a/sys/vm/vm_phys.c b/sys/vm/vm_phys.c
index 83038b2af0ea..98ea22fd2b9d 100644
--- a/sys/vm/vm_phys.c
+++ b/sys/vm/vm_phys.c
@@ -458,10 +458,18 @@ vm_phys_add_seg(vm_paddr_t start, vm_paddr_t end)
{
vm_paddr_t paddr;
- KASSERT((start & PAGE_MASK) == 0,
- ("vm_phys_define_seg: start is not page aligned"));
- KASSERT((end & PAGE_MASK) == 0,
- ("vm_phys_define_seg: end is not page aligned"));
+ if ((start & PAGE_MASK) != 0)
+ panic("%s: start (%jx) is not page aligned", __func__,
+ (uintmax_t)start);
+ if ((end & PAGE_MASK) != 0)
+ panic("%s: end (%jx) is not page aligned", __func__,
+ (uintmax_t)end);
+ if (start > end)
+ panic("%s: start (%jx) > end (%jx)!", __func__,
+ (uintmax_t)start, (uintmax_t)end);
+
+ if (start == end)
+ return;
/*
* Split the physical memory segment if it spans two or more free