Why does the kernel kill processes that run out of memory instead of
just failing memory allocation system calls?
Yuri
yuri at rawbw.com
Thu May 21 17:52:29 UTC 2009
Nate Eldredge wrote:
> Suppose we run this program on a machine with just over 1 GB of
> memory. The fork() should give the child a private "copy" of the 1 GB
> buffer, by setting it to copy-on-write. In principle, after the
> fork(), the child might want to rewrite the buffer, which would
> require an additional 1GB to be available for the child's copy. So
> under a conservative allocation policy, the kernel would have to
> reserve that extra 1 GB at the time of the fork(). Since it can't do
> that on our hypothetical 1+ GB machine, the fork() must fail, and the
> program won't work.
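
For concreteness, here is a minimal sketch of the scenario Nate describes;
the 1 GB size and the exec of "true" are only illustrative. With
conservative (no-overcommit) accounting the fork() below would have to fail
with ENOMEM on a machine with just over 1 GB, even though the child never
writes to the buffer:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define BUF_SIZE ((size_t)1 << 30)   /* 1 GB */

int main(void)
{
    char *buf = malloc(BUF_SIZE);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }
    /* Touch every page so the buffer is really backed by memory. */
    memset(buf, 'x', BUF_SIZE);

    /* Conservative accounting must reserve another 1 GB here for the
     * child's potential private copy; with overcommit the pages simply
     * stay shared copy-on-write and nothing extra is reserved. */
    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* The child only execs, so the second 1 GB is never needed. */
        execlp("true", "true", (char *)NULL);
        _exit(127);
    }
    waitpid(pid, NULL, 0);
    free(buf);
    return 0;
}
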
I don't have a strong opinion for or against "memory overcommit". But I
can imagine one could argue that fork with the intent to exec is a faulty
scenario that is a relic from the past. It could be replaced by some
atomic method that would spawn the child without overcommitting.
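
Interfaces along those lines already exist; for example vfork(2) lets the
child borrow the parent's address space until it execs, so no second copy
ever has to be reserved (posix_spawn(3) is a similar atomic-spawn style
API). A minimal sketch, again with "true" as a placeholder command:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = vfork();
    if (pid == -1) {
        perror("vfork");
        return 1;
    }
    if (pid == 0) {
        /* The child shares the parent's address space, so nothing is
         * copied or reserved; it must only exec or _exit. */
        execlp("true", "true", (char *)NULL);
        _exit(127);
    }
    /* vfork() suspends the parent until the child execs or exits. */
    waitpid(pid, NULL, 0);
    return 0;
}
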
Are there any situations other than fork (and mmap/sbrk) that would
overcommit?
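
The mmap case is easy to demonstrate; a small sketch follows (the 64 GB
figure is just an arbitrary size assumed to exceed RAM plus swap on a
64-bit machine). Under overcommit the mapping typically succeeds because
no pages are backed until they are touched; a conservative kernel would
have to fail it up front:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = (size_t)64 << 30;   /* 64 GB of anonymous memory */

    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_ANON | MAP_PRIVATE, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("mapped %zu bytes at %p\n", len, p);
    munmap(p, len);
    return 0;
}
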
Yuri