default file descriptor limit ?

Bruce Evans brde at optusnet.com.au
Mon Apr 13 10:15:37 UTC 2015


On Mon, 13 Apr 2015, Poul-Henning Kamp wrote:

> --------
> In message <20150413083159.GN1394 at zxy.spb.ru>, Slawa Olhovchenkov writes:
>
>>>> This wastes tons of pointless close system calls in programs which
>>>> use the suboptimal but best practice:
>>>>
>>>> 	for (i = 3; i < sysconf(_SC_OPEN_MAX); i++)
>>>> 		close(i);
>>>>
>>>> For reference Linux seems to default to 1024, leaving it up to
>>>> massive server processes to increase the limit for themselves.
>>
>> This is typical only on startup, I think?
>
> No.  This is mandatory whenever you spawn a subprocess with less privilege.

Not quite.  sysconf() returns the soft rlimit.  Privilege is not needed
to change the soft rlimit back and forth between 0 and the hard rlimit.
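
A minimal sketch of the point (plain getrlimit()/setrlimit() on
RLIMIT_NOFILE; error handling abbreviated, nothing FreeBSD-specific
assumed):

	#include <sys/resource.h>
	#include <stdio.h>
	#include <unistd.h>

	int
	main(void)
	{
		struct rlimit rl;

		/* sysconf() reports the soft limit, not the hard limit. */
		printf("open_max = %ld\n", sysconf(_SC_OPEN_MAX));

		if (getrlimit(RLIMIT_NOFILE, &rl) == -1)
			return (1);

		/* Drop the soft limit to 0 -- no privilege required. */
		rl.rlim_cur = 0;
		if (setrlimit(RLIMIT_NOFILE, &rl) == -1)
			return (1);
		printf("open_max = %ld\n", sysconf(_SC_OPEN_MAX));

		/* ...and raise it back to the hard limit, still unprivileged. */
		rl.rlim_cur = rl.rlim_max;
		if (setrlimit(RLIMIT_NOFILE, &rl) == -1)
			return (1);
		printf("open_max = %ld\n", sysconf(_SC_OPEN_MAX));
		return (0);
	}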

>> May be now time to introduce new login class, for desktop users, [...]
>
> How about "now is the time to realize that very few processes need more
> than a few tens of filedescriptors" ?
> 
> If Linux can manage with a hardcoded default of 1024, so can we...

RLIM_INFINITY seems reasonable for the hard limit and 1024 for the
soft limit.  Large auto-configured values like 400000 are insignificantly
different from infinity anyway.  They are per-process, so even the limits
of 11000 on my small systems are essentially infinite.

There are also the kern.maxfilesperproc and kern.maxfiles limits.  These
are poorly implemented, starting with their default values.
maxfilesperproc defaults to the same value as the rlimit, and kern.maxfiles
is not much larger.  So a single process that allocates up to its rlimit
makes it impossible for any other process, even privileged ones, to get
anywhere near its rlimit.  Some over-commit is needed, but not this much.
The implementation has hacks to let privileged processes allocate a few
more descriptors, but those only help provided privileged processes never
over-commit themselves.
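
For reference, the current values can be read with sysctl(8), or
programmatically; a sketch using sysctlbyname(3), assuming both OIDs are
plain ints as on FreeBSD:

	#include <sys/types.h>
	#include <sys/sysctl.h>
	#include <stdio.h>

	int
	main(void)
	{
		int maxfiles, maxfilesperproc;
		size_t len;

		/* Global limit on open file descriptions. */
		len = sizeof(maxfiles);
		if (sysctlbyname("kern.maxfiles", &maxfiles, &len,
		    NULL, 0) == -1)
			return (1);

		/* Per-process limit, also the default hard rlimit. */
		len = sizeof(maxfilesperproc);
		if (sysctlbyname("kern.maxfilesperproc", &maxfilesperproc,
		    &len, NULL, 0) == -1)
			return (1);

		printf("kern.maxfiles:        %d\n", maxfiles);
		printf("kern.maxfilesperproc: %d\n", maxfilesperproc);
		return (0);
	}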

Bruce

