Directories with 2 million files
Brad Knowles
brad.knowles at skynet.be
Thu Apr 22 17:58:53 PDT 2004
At 4:31 PM -0400 2004/04/22, Robert Watson wrote:
> Unfortunately, a lot of this has to do with the desire to have programs
> behave nicely in ways that scale well only to a limited extent. I.e.,
> sorting and sizing of output. If you have algorithms that require all
> elements in a large array be in memory, such as sorting algorithms, it's
> inevitably going to hurt.
Sorting routines do not necessarily have to keep everything in
memory. External sorting routines designed for low-memory operation
date back to the 1940s and 1950s. Many databases today hold data
that greatly exceeds the memory capacity of any machine we could
possibly build -- at least, with current technology. In some of
them, even the indexes greatly exceed available memory.
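As a rough illustration of the kind of low-memory routine meant here, the
classic approach is an external merge sort: sort chunks that fit in memory,
spill each sorted run to disk, then stream a k-way merge over the runs. This
is only a sketch; the chunk size and helper names are my own, not anything
from this thread.

```python
import heapq
import os
import tempfile

def _spill(sorted_chunk):
    # Write one sorted run to a temporary file and return its path.
    fd, path = tempfile.mkstemp(text=True)
    with os.fdopen(fd, "w") as f:
        for item in sorted_chunk:
            f.write(item + "\n")
    return path

def external_sort(items, chunk_size=100_000):
    # Phase 1: sort fixed-size chunks in memory, spill each to disk.
    runs, chunk = [], []
    for item in items:
        chunk.append(item)
        if len(chunk) >= chunk_size:
            runs.append(_spill(sorted(chunk)))
            chunk = []
    if chunk:
        runs.append(_spill(sorted(chunk)))
    # Phase 2: heapq.merge streams the runs, so only one line per run
    # is held in memory at a time, regardless of total input size.
    files = [open(path) for path in runs]
    try:
        for line in heapq.merge(*files):
            yield line.rstrip("\n")
    finally:
        for f in files:
            f.close()
        for path in runs:
            os.unlink(path)
```

With a small `chunk_size`, even a toy input exercises the spill-and-merge
path rather than a single in-memory sort.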
However, you do need to know when to switch to such algorithms.
I believe this might require some external support, such as indexes
within the filesystem. Depending on the implementation, this could
require changes to every application that has moderate or deep
interaction with the filesystem -- which is a real problem.
--
Brad Knowles, <brad.knowles at skynet.be>
"They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety."
-Benjamin Franklin, Historical Review of Pennsylvania.
SAGE member since 1995. See <http://www.sage.org/> for more info.