default file descriptor limit ?

Bruce Evans brde at optusnet.com.au
Mon Apr 13 09:47:11 UTC 2015


On Mon, 13 Apr 2015, Poul-Henning Kamp wrote:

> --------
> In message <78759.1428912996 at critter.freebsd.dk>, Poul-Henning Kamp writes:
>> 	$ limits
>> 	Resource limits (current):
>> 	[...]
>> 	openfiles              462357
>>
>> say what ?
>>
>> This wastes tons of pointless close system calls in programs which
>> use the suboptimal but best practice:
>>
>> 	for (i = 3; i < sysconf(_SC_OPEN_MAX); i++)
>> 		close(i);

sysconf() takes about as long as a failing close(), so best practice
is to cache the result of sysconf().  Best practice also requires
error checking.

>> For reference Linux seems to default to 1024, leaving it up to
>> massive server processes to increase the limit for themselves.
>>
>> I'm all for autosizing things but this is just plain stupid...

I would have used the POSIX/C limit of 20 for the default, leaving
it up to mere bloatware to increase the limit.  It is too late for
that.  Next best is a default of RLIM_INFINITY.  In FreeBSD-1,
RLIM_INFINITY was only 32 bits, so was only 5 times larger than
the above.  Now it is 64 bits, so it is 20 billion times larger.
Getting the full limit also requires a 64-bit system, since
sysconf() only returns long.  sysconf(_SC_OPEN_MAX) doesn't even
work on 32-bit systems if the limit is above LONG_MAX.

> Just to give an idea how utterly silly this is:
>
> 	#include <stdio.h>
> 	#include <unistd.h>
>
> 	int
> 	main(int c, char **v)
> 	{
> 		int i, j;
>
> 		for (j = 0; j < 100; j++)
> 			for (i = 3; i < sysconf(_SC_OPEN_MAX); i++)
> 				close(i);
> 		return (0);
> 	}
>
> Linux:  	 0.001 seconds
> FreeBSD:	17.020 seconds

1 millisecond is a lot too.

For full silliness:
- optimize as above so that this takes half as long
- increase the default so that it takes 20 billion times longer.
   17.020 / 2 * 20 billion seconds = 5393+ years.

> PS: And don't tell me to fix all code in /usr/ports to use closefrom(2).

I don't see any way to fix ports.  A few might break with the limit of
1024.  The only good thing is that the Linux limit is not very large
and any ports that need a larger limit have probably been made to work
under Linux.
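
For reference, the closefrom() approach that fixed ports would need is
a one-liner (a sketch; closefrom(2) exists on FreeBSD and, since glibc
2.34, on Linux under _GNU_SOURCE):

```c
#define _GNU_SOURCE		/* for closefrom() on glibc >= 2.34 */
#include <unistd.h>

/*
 * closefrom(2) closes every descriptor >= its argument in a single
 * system call on FreeBSD, so its cost does not grow with
 * RLIMIT_NOFILE the way the close() loop does.
 */
static void
close_high_fds(void)
{
	closefrom(3);
}
```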

Worse but correct practice is to use the static limit of OPEN_MAX iff
it is defined.  Only broken systems like FreeBSD define it when the
static limit differs from the dynamic limit.  In FreeBSD, it is
64, so naive software that trusts the limit gets much faster loops than
the above without really trying.
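
The pattern in question looks like this (a sketch of the idiom, not
code from any particular port):

```c
#include <limits.h>	/* OPEN_MAX, iff the system defines it */
#include <unistd.h>

/*
 * Naive but standard-conforming idiom: trust the static OPEN_MAX
 * when <limits.h> defines it, and fall back to the dynamic limit
 * otherwise.  On FreeBSD this returns 64 even when the dynamic
 * limit is hundreds of thousands, so the close() loop is short.
 */
static long
open_max_limit(void)
{
#ifdef OPEN_MAX
	return (OPEN_MAX);
#else
	return (sysconf(_SC_OPEN_MAX));
#endif
}
```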

libc sysconf() has poor handling of unrepresentable rlimits in all cases
(there are just 2 cases; the other one is _SC_CHILD_MAX, and the static
limit CHILD_MAX is broken by its existence in FreeBSD in the same way
as OPEN_MAX):

X 	case _SC_OPEN_MAX:
X 		if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
X 			return (-1);
X 		if (rl.rlim_cur == RLIM_INFINITY)
X 			return (-1);

This is not an error, just an unrepresentable limit.  This fails to
set errno to indicate the error (getrlimit() didn't since this is
not an error).  This works in practice because it is unreachable
-- the kernel clamps this particular rlimit, so RLIM_INFINITY is
impossible.

X 		if (rl.rlim_cur > LONG_MAX) {
X 			errno = EOVERFLOW;
X 			return (-1);
X 		}

As above, except it sets errno.  If this were reachable, then it
would cause problems for buggy applications that don't check for
errors.  But this case shouldn't be an error.  LONG_MAX file
descriptors should be enough for any bloatware.  When 32-bit
LONG_MAX runs out, the bloatware can simply require a 64-bit
system.

X 		return ((long)rl.rlim_cur);
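
A variant that addresses both complaints would treat an unrepresentable
limit as LONG_MAX instead of as an error (a sketch of what the text
suggests, not the actual libc code):

```c
#include <limits.h>
#include <sys/time.h>
#include <sys/resource.h>

/*
 * Sketch: report an unrepresentable RLIMIT_NOFILE as LONG_MAX
 * (saturate) rather than failing with EOVERFLOW or, worse, failing
 * without setting errno.  Only a genuine getrlimit() failure is an
 * error, and getrlimit() sets errno itself in that case.
 */
static long
open_max_saturating(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
		return (-1);		/* errno set by getrlimit() */
	if (rl.rlim_cur == RLIM_INFINITY || rl.rlim_cur > LONG_MAX)
		return (LONG_MAX);	/* saturate; not an error */
	return ((long)rl.rlim_cur);
}
```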

Bruce

