ls -l takes forever to finish
bv at wjv.com
Fri Nov 30 04:40:59 PST 2007
On Fri, Nov 30, 2007 at 05:49,
freebsd-questions-request at freebsd.org moved his mouse, rebooted
for the change to take effect, and then said:
> Date: Thu, 29 Nov 2007 08:42:44 -0500
> From: Bill Moran <wmoran at potentialtech.com>
> Subject: Re: ls -l takes a forever to finish.
> In response to Wojciech Puchar <wojtek at wojtek.tensor.gdynia.pl>:
> > > ls | wc
> > strange. i did
> > [wojtek at wojtek ~/b]$ a=0;while [ $a -lt 10000 ];do mkdir $a;a=$[a+1];done
> > completed in <25 seconds on a 1GHz CPU
> > ls takes 0.1 seconds of user time, ls -l takes 0.3 seconds of user time.
> > unless you have a 486/33 or slower system, there is something wrong.
> Another possible scenario is that the directory is badly fragmented.
> Unless something has changed since I last researched this (which is
> possible) FreeBSD doesn't manage directory fragmentation during use.
> If you're constantly adding and removing files, it's possible that
> the directory entry is such a mess that it takes ls a long time to
> process it.
> Of course, Wojciech's test won't demonstrate this, as the directory is
> freshly created, even to the point that the filenames are actually in
> alpha order in the directory.
> One method to test this would be to tar up the directory and extract
> it somewhere else on the machine (assuming you have enough space to do
> so). If the newly created directory doesn't have the problem, it's
> likely that the directory entry has become a mess. Use ls -l to
> compare the sizes of the actual directories themselves as a little check.
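A minimal sketch of that test (the paths here are hypothetical; substitute your own suspect directory, and make sure the scratch location has room for a full copy):

```shell
# Sketch of the tar-copy test described above. SRC is a hypothetical
# suspect directory; /tmp/dirtest is a scratch destination.
SRC=/home/user/bigdir
mkdir /tmp/dirtest
tar -cf - -C "$(dirname "$SRC")" "$(basename "$SRC")" | tar -xf - -C /tmp/dirtest

# Compare the sizes of the directory entries themselves:
ls -ld "$SRC" "/tmp/dirtest/$(basename "$SRC")"

# Time a long listing of each; a large difference suggests the original
# directory entry has become fragmented and bloated:
time ls -l "$SRC" > /dev/null
time ls -l "/tmp/dirtest/$(basename "$SRC")" > /dev/null
```

The tar-to-tar pipe avoids creating an intermediate tarfile, so you only need space for the copy itself.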
There is a way to recreate the directory tree >without< having to
use up a lot of free space. I used to do this in the old days when I
was running a news node: some hierarchies would grow so large that
directory access stayed slow even after the expire, because the
directory entry itself remained huge.
Read the man page for cpio and use the -pdlmv options.
This will create a new directory tree using ONLY hard links, so that you
have done nothing except make directories: NO file data is copied.
Then you can remove all the files in the original directory,
returning the link count on each file to 1, and you are left with an
optimized directory that has all the files of the original. And if I
recall correctly it behaves like the tar utility in that the
sub-directory entries end up at the front of each directory, thus
reducing search time.
I have >not< used the GNU version of this, but I used it all the time
on the SysV-based systems I maintained. It should behave the same,
though I have noticed that GNU versions of standard tools sometimes
have subtle differences.
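A sketch of that link-copy rebuild (directory names are examples; the technique only works when source and destination are on the same filesystem, since hard links cannot cross filesystems):

```shell
# Sketch of the cpio link-copy rebuild described above.
# /var/spool/news/alt/binaries stands in for the bloated directory.
cd /var/spool/news/alt/binaries          # the bloated directory
mkdir ../binaries.new                    # fresh directory on the SAME filesystem

# -p pass-through mode, -d make directories, -l link instead of copy,
# -m preserve modification times, -v verbose. Only links are made, so
# almost no extra space is used.
find . -print | cpio -pdlmv ../binaries.new

# After verifying the new tree, remove the old one; each file's link
# count drops back to 1, and the new directory entry is compact.
cd ..
rm -rf binaries
mv binaries.new binaries
```

Because cpio -p reads the file list from stdin, you can also filter it through grep or find predicates to rebuild only part of a hierarchy.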
> Anyway, if that turns out to be the problem, you can fix it by tarring
> the directory and then restoring it from the tarfile. Not an ideal
> solution, mind you.
Try the above sometime >>IF EVERYTHING IS ON THE SAME FILE SYSTEM<<
and prepare to be amazed. It's fast.
> Bill Moran
Bill Vermillion - bv @ wjv . com