ZFS repeatable reboot 8.0-RC1
grarpamp at gmail.com
Thu Oct 15 22:15:20 UTC 2009
Note: I tried setting vfs.zfs.arc_max=32M, and ZFS memory usage still
grew past that limit and the machine rebooted.
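For reference, a minimal /boot/loader.conf sketch of the tunables under
discussion; the values are illustrative placeholders only, not a
recommendation (and, as noted above, capping arc_max alone did not prevent
the reboot here):

```
# /boot/loader.conf -- illustrative values, not a recommendation
vfs.zfs.arc_max="64M"        # cap the ZFS ARC target size
vm.kmem_size="512M"          # kernel memory reserved at boot
vm.kmem_size_max="512M"      # ceiling for auto-sized kernel memory
```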
Forwarding a note I received...
>> Your machine is starving!
> How can this be, there is over 500MiB free ram at all times? I'm
I've got the following in my kernel configuration file
and in /boot/loader.conf
On two machines with 2G of RAM... both 8.0-RC1/i386
The ZFS tuning guide gives a better idea of how to play with things like that.
Am I reading correctly that vm.kmem_size is how much ram the kernel
initially allocates for itself on boot? And that vm.kmem_size_min
and vm.kmem_size_max are the range that vm.kmem_size is allowed to
float naturally within at runtime?
Is KVA_PAGES the hard maximum address space the kernel is allowed/capable
of addressing/using at runtime? Such that I could set vm.kmem_size_max
to the KVA_PAGES limit and then vm.kmem_size will grow into it as
needed?
With the caveat, of course, that with the defaults and hardware below,
if I just bumped vm.kmem_size_max to 1GiB [per the KVA_PAGES limit],
I'd have to add maybe another 1GiB of RAM so that this new vm.kmem_size_max
kernel reservation wouldn't conflict with userland memory needs
when vm.kmem_size grows to it?
And is KVA_PAGES typically, say, 1/3 of installed RAM?
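As a sanity check on that guess: KVA_PAGES sizes the kernel's virtual
address space rather than a share of installed RAM, and the arithmetic from
the param.h comment quoted further down works out as follows (plain shell
arithmetic, nothing FreeBSD-specific):

```shell
# Each KVA_PAGES page-table page maps 4 MB without PAE, 2 MB with PAE,
# so the kernel address space in MB is pages * unit:
echo $((256 * 4))   # non-PAE, KVA_PAGES=256 -> prints 1024 (1 GiB)
echo $((512 * 2))   # PAE,     KVA_PAGES=512 -> prints 1024 (1 GiB)
```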
If vm.kmem_size starts out below vm.kmem_size_max, can user
apps use the unused space (vm.kmem_size_max - vm.kmem_size) until
vm.kmem_size grows to vm.kmem_size_max and the kernel kills them
off? Or can user apps only ever use (RAM - [KVA_PAGES hard
limit and/or vm.kmem_size_max])?
What is the idea behind setting vm.kmem_size = vm.kmem_size_max?
Shouldn't just vm.kmem_size_max be set, allowing vm.kmem_size
[unset] to grow up to vm.kmem_size_max, instead of allocating it all
at boot with vm.kmem_size?
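For illustration, the two approaches look like this in /boot/loader.conf
(the 512M figures are hypothetical placeholders, not recommendations):

```
# Approach 1: pin the kernel memory size at boot
vm.kmem_size="512M"

# Approach 2: leave vm.kmem_size unset and only set the ceiling,
# letting the kernel auto-size within the min/max range
vm.kmem_size_max="512M"
```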
Maybe someone can wikify these answers?
I think I need to find more to read and then test one by one to
see what changes.
With untuned defaults and 1GiB RAM I have:

#define KVA_PAGES 256                   # gives 1GiB of kernel address space
vm.kmem_size_max: 335544320             # auto-calculated by the kernel at boot?
                                        #   Less than the KVA_PAGES limit?
vm.kmem_size: 335544320                 # amount in use at runtime?

I'm still figuring out how to find and add up all the kernel memory.

vfs.zfs.arc_meta_used: 56241732         # greater than meta_limit?
kstat.zfs.misc.arcstats.p: 20589785     # was 104857600 at boot
kstat.zfs.misc.arcstats.c: 128292242    # was 209715200 at boot
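To make those byte counts easier to compare, here is the MiB conversion
(integer division by 2^20, plain shell arithmetic):

```shell
# sysctl values quoted above, converted bytes -> MiB
for v in 335544320 128292242 56241732; do
    echo $((v / 1048576))
done
# prints 320, 122 and 53: the 320 MiB kmem_size_max default is indeed
# well under the 1 GiB KVA_PAGES address space
```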
vm.kmem_size
        Sets the size of kernel memory (bytes). This overrides the
        value determined when the kernel was compiled. Modifies
        VM_KMEM_SIZE.

vm.kmem_size_min, vm.kmem_size_max
        Sets the minimum and maximum (respectively) amount of kernel
        memory that will be automatically allocated by the kernel.
        These override the values determined when the kernel was
        compiled. Modifies VM_KMEM_SIZE_MIN and VM_KMEM_SIZE_MAX.
/*
 * Size of Kernel address space.  This is the number of page table pages
 * (4MB each) to use for the kernel.  256 pages == 1 Gigabyte.
 * This **MUST** be a multiple of 4 (eg: 252, 256, 260, etc).
 * For PAE, the page table page unit size is 2MB.  This means that 512 pages
 * is 1 Gigabyte.  Double everything.  It must be a multiple of 8 for PAE.
 */
#ifdef PAE
#define KVA_PAGES 512
#else
#define KVA_PAGES 256
#endif
# Change the size of the kernel virtual address space. Due to
# constraints in loader(8) on i386, this must be a multiple of 4.
# 256 = 1 GB of kernel address space. Increasing this also causes
# a reduction of the address space in user processes. 512 splits
# the 4GB cpu address space in half (2GB user, 2GB kernel). For PAE
# kernels, the value will need to be double non-PAE. A value of 1024
# for PAE kernels is necessary to split the address space in half.
# This will likely need to be increased to handle memory sizes >4GB.
# PAE kernels default to a value of 512.
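The user/kernel split described in that comment can be double-checked with
page arithmetic (non-PAE case: 4 MB per page-table page, 4 GB of i386
virtual address space):

```shell
TOTAL_MB=4096                   # 4 GB i386 virtual address space
for kva in 256 512; do
    kern=$((kva * 4))           # kernel share in MB
    user=$((TOTAL_MB - kern))   # remainder for user processes
    echo "KVA_PAGES=$kva -> kernel ${kern}MB, user ${user}MB"
done
# KVA_PAGES=512 gives the 2GB/2GB split mentioned above
```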