Re: mmap( MAP_ANON) is broken on current. (was Still seeing Failed assertion: "p[i] == 0" on armv7 buildworld)
Date: Sat, 22 Nov 2025 19:54:21 UTC
On 22.11.2025 19:45, Konstantin Belousov wrote:
> On Sat, Nov 22, 2025 at 07:01:03PM +0100, Michal Meloun wrote:
>>> Would you please gather the same debugging info, with this patch applied?
>> Oops, sorry.
>> In the meantime, the next round with the vm_map patch finished successfully.
>
> It was still the case of coalescing previous entry and the mapping.
>
> It is weird: the patch ensures that there are no pages in the object
> backing the new region, and due to the ensured properties of the object,
> there should be no way to create pages under us.
> I am almost sure that the provided patch is correct, but there might be
> some additional cases that I missed.
>
> Please apply the following debugging patch; it includes the vm_object
> part. Instead of allowing the corruption in userspace, the kernel should
> now panic. Can you confirm that?
>
> diff --git a/sys/vm/vm_map.c b/sys/vm/vm_map.c
> index 6b09552c5fee..76808b5ad7f1 100644
> --- a/sys/vm/vm_map.c
> +++ b/sys/vm/vm_map.c
> @@ -1743,6 +1743,27 @@ vm_map_insert1(vm_map_t map, vm_object_t object, vm_ooffset_t offset,
> (vm_size_t)(prev_entry->end - prev_entry->start),
> (vm_size_t)(end - prev_entry->end), cred != NULL &&
> (protoeflags & MAP_ENTRY_NEEDS_COPY) == 0)) {
> + vm_object_t obj = prev_entry->object.vm_object;
> + if (obj != NULL) {
> + struct pctrie_iter pages;
> + vm_page_t p;
> +
> + vm_page_iter_init(&pages, obj);
> + p = vm_radix_iter_lookup_ge(&pages,
> + OFF_TO_IDX(prev_entry->offset +
> + prev_entry->end - prev_entry->start));
> + if (p != NULL) {
> + KASSERT(p->pindex >= OFF_TO_IDX(prev_entry->offset +
> + prev_entry->end - prev_entry->start +
> + end - start),
> + ("FOUND page %p pindex %#jx "
> + "e %#jx %#jx %#jx %#jx",
> + p, p->pindex, (uintmax_t)prev_entry->offset,
> + (uintmax_t)prev_entry->end,
> + (uintmax_t)prev_entry->start,
> + (uintmax_t)(end - start)));
> + }
> + }
> /*
> * We were able to extend the object. Determine if we
> * can extend the previous map entry to include the
> diff --git a/sys/vm/vm_object.c b/sys/vm/vm_object.c
> index 5b4517d2bf0c..9bb4e54edd96 100644
> --- a/sys/vm/vm_object.c
> +++ b/sys/vm/vm_object.c
> @@ -2189,13 +2189,19 @@ vm_object_coalesce(vm_object_t prev_object, vm_ooffset_t prev_offset,
> next_size >>= PAGE_SHIFT;
> next_pindex = OFF_TO_IDX(prev_offset) + prev_size;
>
> - if (prev_object->ref_count > 1 &&
> - prev_object->size != next_pindex &&
> + if (prev_object->ref_count > 1 ||
> + prev_object->size != next_pindex ||
> (prev_object->flags & OBJ_ONEMAPPING) == 0) {
> VM_OBJECT_WUNLOCK(prev_object);
> return (FALSE);
> }
>
> + KASSERT(next_pindex + next_size > prev_object->size,
> + ("vm_object_coalesce: "
> + "obj %p next_pindex %#jx next_size %#jx obj_size %#jx",
> + prev_object, (uintmax_t)next_pindex, (uintmax_t)next_size,
> + (uintmax_t)prev_object->size));
> +
> /*
> * Account for the charge.
> */
> @@ -2222,26 +2228,13 @@ vm_object_coalesce(vm_object_t prev_object, vm_ooffset_t prev_offset,
> * Remove any pages that may still be in the object from a previous
> * deallocation.
> */
> - if (next_pindex < prev_object->size) {
> - vm_object_page_remove(prev_object, next_pindex, next_pindex +
> - next_size, 0);
> -#if 0
> - if (prev_object->cred != NULL) {
> - KASSERT(prev_object->charge >=
> - ptoa(prev_object->size - next_pindex),
> - ("object %p overcharged 1 %jx %jx", prev_object,
> - (uintmax_t)next_pindex, (uintmax_t)next_size));
> - prev_object->charge -= ptoa(prev_object->size -
> - next_pindex);
> - }
> -#endif
> - }
> + vm_object_page_remove(prev_object, next_pindex, next_pindex +
> + next_size, 0);
>
> /*
> * Extend the object if necessary.
> */
> - if (next_pindex + next_size > prev_object->size)
> - prev_object->size = next_pindex + next_size;
> + prev_object->size = next_pindex + next_size;
>
> VM_OBJECT_WUNLOCK(prev_object);
> return (TRUE);
Unfortunately, the KASSERT doesn't fire on the failure. Don't hit me, please. :)
Could this be related to the fact that the VM map has another region
immediately after the added page?
__je_pages_map: addr: 0x0, ret: 0x34f6f000, size: 4096, alignment: 4096,
prot: 0x00000003, flags: 0x0C001002
__je_pages_map: i: 0, p[i]: 0xFFFFF000, p: 0x34f6f000
...
3440 0x34f4e000 0x34f6e000 rw- 0 3 13 0 ----- sw
3440 0x34f6e000 0x34f70000 rw- 1 1 1 0 ----- sw
3440 0x34f70000 0x34f82000 rw- 0 3 13 0 ----- sw
...