expanding past 1 TB on amd64
chris.torek at gmail.com
Wed Jun 19 08:32:36 UTC 2013
In src/sys/amd64/include/vmparam.h is this handy map:
* 0x0000000000000000 - 0x00007fffffffffff user map
* 0x0000800000000000 - 0xffff7fffffffffff does not exist (hole)
* 0xffff800000000000 - 0xffff804020100fff recursive page table (512GB slot)
* 0xffff804020101000 - 0xfffffdffffffffff unused
* 0xfffffe0000000000 - 0xfffffeffffffffff 1TB direct map
* 0xffffff0000000000 - 0xffffff7fffffffff unused
* 0xffffff8000000000 - 0xffffffffffffffff 512GB kernel map
showing that the system can handle at most 1 TB of physical memory
(because of the direct map), with at most half of that usable as
kernel memory (less, really, due to the inevitable VM fragmentation).
New boards are coming soonish that will be able to go past that
(24 DIMMs of 64 GB each = 1.5 TB). Or, if some crazy people :-)
want to use most of a 768 GB board (24 DIMMs of 32 GB each,
possible today although the price is kind of staggering) as
wired-down kernel memory, the 512 GB kernel VM area is already
a problem.
I have not wrapped my head around the amd64 pmap code but figured
I'd ask: what might need to change to support larger spaces?
Obviously NKPML4E in amd64/include/pmap.h, for the kernel start
address; and NDMPML4E for the direct map. It looks like this
would adjust KERNBASE and the direct map appropriately. But would
that suffice, or have I missed something?
For that matter, if these are changed to make space for future
expansion, what would be a good expansion size? Perhaps multiply
the sizes by 16? (If memory doubles roughly every 18 months,
that should give room for at least 5 years.)