tmpfs is overly aggressive on memory usage

From: Mike Karels <>
Date: Wed, 13 Dec 2023 01:25:55 UTC
I have been looking at tmpfs and doing some experiments.  I had noticed
that tmpfs defaults to using a memory limit of "available amount of memory
(including main memory and swap space)", which seemed overly optimistic.
It isn't as bad as I first thought, as it is looking at the current free
memory and swap space.  However, a test that wrote to a file until something
failed exhausted all of swap space, and processes started getting killed.
In my test, the killing started with a large memory hog, but then took out
root shells, nfsd, etc.  This seems bad, and the system was screwed up enough
that there was no way to reboot it short of a reset.

One part of the problem was that, with a default size, tmpfs enforced the
limit when creating a file but not when writing to it.  I think this is an
outright bug, and I have a proposed fix in  It has the
disadvantage that it can fail a write with ENOSPC due to a transient spike
in memory load.  The memory limit is aggressive enough, though, that the
system is very close to memory exhaustion at that point, as well as swap
space exhaustion (assuming there is swap).

However, the limit is still high enough that continuous writes were likely
to cause processes to be killed, and even system hangs.  I decided to try
increasing the memory reserve; currently tmpfs has a memory reserve threshold
of 4 MB.  I ended up with a default reserve based on a percentage of memory
plus swap, allowing 95% of the available space to be used.  This works fairly
well in my tests: writes fail when the system is close to the edge and paging
fairly heavily, but no processes are killed.  However, the other memory load
in these tests was essentially constant; these changes cannot predict changes
in other memory demand, only react to the current situation.  This change
is in

I'm interested in any feedback, in particular other approaches that might
be more general.