kernel killing processes when out of swap
Matthias Buelow
mkb at mkbuelow.net
Tue Apr 12 11:16:58 PDT 2005
Dan Nelson <dnelson at allantgroup.com> writes:
>Another issue is things like shared libraries; without overcommit you
>need to reserve the file size * the number of processes mapping it,
>since you can't guarantee they won't touch every COW page handed to
>them. I think you can design a shlib scheme where you can map the libs
>RO; not sure if you would take a performance hit or if there are other
>considerations. There's a similar problem when large processes want to
>fork+exec something; for a fraction of a second you need to reserve 2x
>the process's space until the exec frees it. vfork solves that
>problem, at the expense of blocking the parent until the child's
>process is loaded.
Is that really problematic these days, with huge disk sizes? I mean, a
couple of GB of swap doesn't hurt anyone when disks come in sizes
around 250GB, especially when you gain much more reliable operation in
exchange. And maybe one could make overcommitting configurable, so
that all scenarios are provided for. I for one would happily add some
more swap space if I could get behaviour where the OS doesn't go
politician and promise anything and everything it then cannot deliver.
Overcommitting made sense in the early '90s, when you had a large
address space (4GB) and relatively small disks (~1GB); I'm not sure it
makes much sense anymore. It's a typical kludge.
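(For what it's worth, making overcommit configurable is exactly what
Linux exposes through sysctl; the knobs below are the Linux names, and
this is just a sketch of that interface, not anything FreeBSD offered
at the time:)

```shell
# 0 = heuristic overcommit, 1 = always overcommit, 2 = strict accounting
sysctl vm.overcommit_memory

# In strict mode (2), the commit limit is swap + overcommit_ratio% of RAM
sysctl vm.overcommit_ratio
```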
This stuff has been discussed in the past. It will probably continue
to be an issue until it is resolved satisfactorily (i.e., so that both
the overcommitters and the reliable-VM camp can have their way).
mkb.
More information about the freebsd-stable
mailing list