Max. number of opened files, efficiency

Laszlo Nagy gandalf at shopzeus.com
Wed Aug 13 14:44:14 UTC 2008


> Directories generally start to perform poorly when you put too many files
> in them (i.e., the time required to add a new directory entry or to find
> an existing name in the directory goes up).
>
> If you're going to be making 10s of 1000s of files, I'd recommend making
> a tree of directories.  I.e., make directories 1-10, then put files
> 0-999 in directory 1, files 1000-1999 in directory 2, etc.
>   
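
For reference, the tree layout described above might be mapped like
this rough Python sketch (the helper name and parameters are
hypothetical):

    import os

    def sharded_path(base, index, per_dir=1000):
        # Hypothetical helper: files 0-999 land in subdirectory "1",
        # 1000-1999 in "2", and so on, mirroring the scheme above.
        subdir = os.path.join(base, str(index // per_dir + 1))
        os.makedirs(subdir, exist_ok=True)
        return os.path.join(subdir, "file_%06d.dat" % index)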
In fact I do not need any name associated with the file. I just need a
temporary file object that I can access in read/write mode and then
throw away. For some reason, this kind of temporary file is
implemented this way (at least in Python):

1. Create a file name with mkstemp (this also creates the file itself).
2. Create a file object with that name.
3. Save the file descriptor number.
4. Unlink the file name (remove the directory entry).
5. Return the file handle (which can be closed later).
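
A minimal sketch of those five steps, assuming POSIX unlink semantics
(the function name is hypothetical):

    import os
    import tempfile

    def anonymous_tempfile(mode="w+b"):
        # Steps 1-2: mkstemp atomically creates the file and returns
        # a low-level descriptor together with the generated name.
        fd, name = tempfile.mkstemp()
        try:
            # Step 4: remove the directory entry; the open descriptor
            # keeps the inode (and its disk blocks) alive.
            os.unlink(name)
        except OSError:
            os.close(fd)
            raise
        # Steps 3 and 5: wrap the saved descriptor in a file object
        # and return it; closing it later releases the space.
        return os.fdopen(fd, mode)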

This is executed each time I create a temporary file. As you can see,
the number of entries in the tmp directory won't grow at all. (If it
were possible, I would create the file without a name in the first place.)

When I close the file handle, the OS will hopefully deallocate the disk
space, because from that point on, nothing references the file.
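
The standard library already wraps these steps, so a short usage
sketch shows the whole lifecycle:

    import tempfile

    # TemporaryFile does the mkstemp/unlink dance on POSIX, so the
    # file never has a usable name in the directory.
    with tempfile.TemporaryFile(mode="w+b") as f:
        f.write(b"scratch data")
        f.seek(0)
        assert f.read() == b"scratch data"
    # Leaving the block closes the handle; with no directory entry
    # left, the kernel can reclaim the disk space immediately.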

Another interesting (off-topic) question: I could not open 10 000
files under Windows XP. The error was "too many open files". How can I
overcome this?
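
A small probe sketch can at least show where a given system's limit
sits (assuming the per-process handle cap, not the disk, is what
fails first):

    import tempfile

    # Open anonymous temp files until the OS refuses; the count is
    # the effective per-process open-file limit. On Windows the C
    # runtime caps open handles, which is presumably why 10 000
    # opens fail there.
    handles = []
    try:
        while len(handles) < 20000:
            handles.append(tempfile.TemporaryFile())
    except OSError as exc:
        print("limit reached after %d open files: %s" % (len(handles), exc))
    finally:
        for f in handles:
            f.close()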

Thanks,

   Laszlo


