kernel memory checks on boot vs. boot time

Bruce Evans brde at
Thu Mar 24 10:42:01 UTC 2011

On Wed, 23 Mar 2011, Peter Wemm wrote:

> On Wed, Mar 23, 2011 at 11:26 AM, John Baldwin <jhb at> wrote:
>> On Wednesday, March 23, 2011 1:14:43 pm Alexander Best wrote:
>>> or how about we dump the current memory checks, introduce a tunable and
>>> implement some *real* memory checks. as john pointed out the current checks
>>> are just rudimentary.
>> I think that doing *real* memory checks isn't really the role of our kernel.
>> Better effort would be spent on improving memtest86 since it is already trying
>> to solve this problem.

I agree.

> Part of the reason for this "check" is a sanity check to make sure we
> enumerated memory correctly and that we have at least got basic ram
> functionality.  The existence of hw.physmem complicates this.  On
> machines where hw.physmem could be used to tell the kernel that there
> was more ram present than the kernel enumerates (old bioses etc), this
> was kind of important to sanity check.

It seems to check just 1 word per page.  I think that's all it ever did.
So it is nothing like a memory test, but is a probe for the memory size.

> I'd kinda like to keep a basic "is this real, non mirrored ram?" test
> there.  eg: the 2-pass step of writing physical address into each page
> and then checking that they are still there on the second pass.

It's not a very sophisticated probe, but it does do this mirror check.
Or does it?  I can only see 1 pass, with writes of 0xaaaaaaaa, 0x55555555,
0xffffffff, 0 and the original value to the single word tested.

The fact that this takes more than a few microseconds shows that memory
sizes are now _very_ large.  Perhaps the 4 test writes and overhead for
every page can be reduced.  The overhead includes a page table write
and an invltlb() for every page.  The 4 test writes probably really do
take only a few microseconds for all of memory, but the invltlb() takes
much longer.
It could at least be an invlpg() on all systems that can have much memory.
But if there is more virtual address space than memory (as on amd64?),
the probe can simply map all of memory and use a single invltlb().  Then
each set of memory accesses for each page should take about the same
time as a single access (for a cache miss).  Say 100 nsec per page.
With 128 GB, that is 3.36 seconds.  Still a bit too much, and a 2-pass
mirror test would double that by giving 2 cache misses per page.


More information about the freebsd-arch mailing list