About reading and writing to files
freebsd-questions-local at be-well.no-ip.com
Fri May 30 06:16:01 PDT 2003
Rich Morin <rdm at cfcl.com> writes:
> At 3:04 AM -0500 5/30/03, Bingrui Foo wrote:
> >I'm wondering in freeBSD, if I have a directory with 10,000 files, or
> >maybe even 100,000 files, each about 5 kb long. Wondering will reading and
> >writing to any one of these files in C be affected by the sheer number of
> >these files? Will the access time be affected significantly?
> >Just wondering because not sure whether I should put these data in a
> >database or just use files with unique names.
> >Also will separating the files into many directories help?
> Looking up .../x/12/34/56 can be done in logarithmic time (i.e., look
> up .../x/12, then .../x/12/34, then .../x/12/34/56); looking up
> .../y/123456 (unless some optimization has been added) will require a
> linear scan through the directory. In short, don't go there...
An optimization *has* been added. If you have

options UFS_DIRHASH #Improve performance on big directories

in your kernel (it's been in GENERIC for at least several months),
then you should get (in the limit) logarithmic time on *each* lookup.
And there is a large constant-factor speedup on top of that, as well.
The size of the files doesn't matter, and the number of files
shouldn't matter in the range of 10,000. Whether it matters at
100,000 I can't guess offhand; obviously it will depend on how often
the application does a lookup.