Why does the kernel kill processes that run out of memory instead of just failing memory allocation system calls?

Alfred Perlstein alfred at freebsd.org
Fri May 22 07:34:00 UTC 2009


* Yuri <yuri at rawbw.com> [090521 10:52] wrote:
> Nate Eldredge wrote:
> >Suppose we run this program on a machine with just over 1 GB of 
> >memory. The fork() should give the child a private "copy" of the 1 GB 
> >buffer, by setting it to copy-on-write.  In principle, after the 
> >fork(), the child might want to rewrite the buffer, which would 
> >require an additional 1 GB to be available for the child's copy.  So 
> >under a conservative allocation policy, the kernel would have to 
> >reserve that extra 1 GB at the time of the fork(). Since it can't do 
> >that on our hypothetical 1+ GB machine, the fork() must fail, and the 
> >program won't work.
> 
> I don't have strong opinion for or against "memory overcommit". But I 
> can imagine one could argue that fork with intent of exec is a faulty 
> scenario that is a relic from the past. It can be replaced by some 
> atomic method that would spawn the child without overcommitting.
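
Roughly the program Nate is describing, sketched out (untested, written
just to illustrate; the 1 GB size is the one from his example):

#include <err.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define BUFSZ   ((size_t)1 << 30)       /* ~1 GB, as in the example above */

int
main(void)
{
        char *buf;
        pid_t pid;

        if ((buf = malloc(BUFSZ)) == NULL)
                err(1, "malloc");
        memset(buf, 'a', BUFSZ);        /* dirty the pages so they are really backed */

        /*
         * With overcommit this fork() succeeds; with strict accounting the
         * kernel would have to reserve a second 1 GB here and fail with
         * ENOMEM on the hypothetical 1+ GB machine.
         */
        if ((pid = fork()) == -1)
                err(1, "fork");
        if (pid == 0) {
                /* Child rewrites the buffer: COW now needs the second 1 GB. */
                memset(buf, 'b', BUFSZ);
                _exit(0);
        }
        waitpid(pid, NULL, 0);
        return (0);
}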

vfork does that, however it's not sufficient for many scenarios.
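
For the fork-then-exec case itself, something along these lines (again
just a sketch, /bin/ls is only a stand-in): the vfork()ed child borrows
the parent's address space until execve() or _exit(), so no second copy
of a large parent has to be reserved:

#include <err.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
        char *const argv[] = { "ls", "-l", NULL };
        char *const envp[] = { NULL };
        pid_t pid;

        if ((pid = vfork()) == -1)
                err(1, "vfork");
        if (pid == 0) {
                /* In the vfork()ed child only execve() and _exit() are safe. */
                execve("/bin/ls", argv, envp);
                _exit(127);             /* only reached if execve() failed */
        }
        waitpid(pid, NULL, 0);
        return (0);
}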

> Are there any other than fork (and mmap/sbrk) situations that would 
> overcommit?

SysV shm?  Maybe more.
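
Both of those hand out address space up front and can defer the real
pages until they are touched.  A quick illustration (sizes picked
arbitrarily, and the SysV segment may be capped by SHMMAX on a given
system):

#include <err.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/mman.h>
#include <sys/shm.h>

#define BIG     ((size_t)1 << 30)       /* 1 GB, arbitrary */

int
main(void)
{
        void *p;
        int id;

        /* Anonymous mmap: the mapping is created now, pages only when written. */
        p = mmap(NULL, BIG, PROT_READ | PROT_WRITE, MAP_ANON | MAP_PRIVATE,
            -1, 0);
        if (p == MAP_FAILED)
                err(1, "mmap");

        /* SysV shared memory: the segment exists, backing can be deferred too. */
        if ((id = shmget(IPC_PRIVATE, BIG, IPC_CREAT | 0600)) == -1)
                err(1, "shmget");
        shmctl(id, IPC_RMID, NULL);     /* clean up */
        return (0);
}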

-- 
- Alfred Perlstein

