SOLVED Disk-Performance issue?

Michael Schuh michael.schuh at gmail.com
Tue May 10 06:11:41 PDT 2005


Hello,

thanks to all who gave me suggestions on my request.

The tip from Charles was only the beginning.
The last step was setting vfs.ufs.dirhash_maxmem
via sysctl to a higher value, in my case 20 MB.

Copying all 523,000 files used over 7 MB of
dirhash_mem.

Now, after raising vfs.ufs.dirhash_maxmem, I get a throughput
of 4-5 MByte/s.
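For reference, the tuning described above can be applied like this (a sketch; the 20 MB value is the one used in this thread, and vfs.ufs.dirhash_maxmem takes its value in bytes, so 20 MB = 20971520):

```shell
# Check how much memory dirhash is using and its current cap
# (requires a kernel built with options UFS_DIRHASH)
sysctl vfs.ufs.dirhash_mem
sysctl vfs.ufs.dirhash_maxmem

# Raise the cap to 20 MB for the running system
sysctl vfs.ufs.dirhash_maxmem=20971520

# Make the setting persistent across reboots
echo 'vfs.ufs.dirhash_maxmem=20971520' >> /etc/sysctl.conf
```

If vfs.ufs.dirhash_mem keeps hitting the cap while large directories are being traversed, the hashes get evicted and performance drops back to the linear-scan case, which is why raising the limit helped here.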

I thank all the people who gave me the Power to serve :-)))

regards

Michael

2005/5/10, Charles Swiger <cswiger at mac.com>:
> On May 10, 2005, at 6:46 AM, Michael Schuh wrote:
> > Now I have 2 directories with ~500,000-600,000 files, each with a size
> > of ~5 kByte.
> > When copying the files from one disk to another, or to a directory on
> > the same disk
> > (same behavior), I can see this behavior:
> > [ ... ]
> > Can anyone explain where this behavior comes from?
> > Does it come from the filesystem, or from my disks, perhaps because
> > they are too hot? (I think not)
> 
> Directories are kept as lists.  Adding files to the end of a list takes
> longer as the list gets bigger.  There is a kernel option called
> UFS_DIRHASH which can be enabled and which will help this
> kind of situation out significantly, but even with it, you aren't going
> to get great performance when you put a half-million files into a
> single directory.
> 
> Try breaking this content up into one or two levels of subdirectories.
> See the way the Squid cache works...
> 
> --
> -Chuck
> 
>
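Charles's Squid-style layout can be sketched as follows. This is a hypothetical helper, not something from the thread: it hashes each filename and uses two pairs of hex digits as nested directory names, so ~500,000 files fan out over 256 x 256 buckets of only a handful of files each.

```shell
#!/bin/sh
# Map a filename to a two-level subdirectory bucket, Squid-style.
# Example result: "3a/7f/mailbox-000123.msg"
bucket_path() {
    # MD5 of the name; use BSD md5(1) if present, else GNU md5sum
    h=$(printf '%s' "$1" | md5 -q 2>/dev/null ||
        printf '%s' "$1" | md5sum | cut -c1-32)
    # First two hex pairs become the directory levels
    printf '%s/%s/%s\n' \
        "$(printf '%s' "$h" | cut -c1-2)" \
        "$(printf '%s' "$h" | cut -c3-4)" \
        "$1"
}

bucket_path "mailbox-000123.msg"
```

Because the bucket is derived from the name alone, any process can recompute where a file lives without a lookup table, and each directory stays small enough that linear directory scans are cheap even without dirhash.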


More information about the freebsd-stable mailing list