4.8 ffs_dirpref problem

Ken Marx kmarx at vicor.com
Thu Oct 23 11:12:58 PDT 2003


Thanks for the reply,

We actually *did* try -s 4096 yesterday (not quite what you suggested)
with spotty results: Sometimes it seemed to go more quickly, but often
not.

Let me clarify our test: We have a 1.5 GB tar file from our production
RAID that fairly represents the distribution of our data. We hit the
performance problem when we get to dirs with lots of small-ish files.
But, as Julian mentioned, we typically have many flavors of file
sizes and directory populations.

Admittedly, our untar'ing test isn't necessarily representative
of what happens in production - we were just trying to fill the
disk and recreate the problem here. We *did* at least hit a noticeable
problem, and we believe it's the same behavior that's hitting production.
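
For concreteness, the fill/untar loop looks roughly like the following
(the mount point, tarball path, and 99% cutoff are illustrative, not our
exact script):

	# Repeatedly untar the 1.5 GB sample into fresh directories,
	# timing each pass, until the filesystem is nearly full.
	i=0
	while [ "$(df /mnt/test | awk 'NR==2 { print $5 }' | tr -d '%')" -lt 99 ]; do
		mkdir /mnt/test/run.$i
		/usr/bin/time tar xpf /var/tmp/sample.tar -C /mnt/test/run.$i
		i=$((i + 1))
	done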

I just tried your exact suggested settings on an fs that was
already 96% full, and still experienced the very sluggish
behavior on exactly the same type of files/dirs.
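
For the record, the sequence I used was along these lines (the device
name is just an example; the fs was unmounted when tunefs ran):

	umount /mnt/test
	tunefs -f 4096 -s 1500 /dev/da0s1e
	mount /dev/da0s1e /mnt/test
	# Sanity check: this should echo back the new values, assuming
	# your dumpfs prints the avgfilesize/avgfpdir superblock fields.
	dumpfs /dev/da0s1e | grep -i avg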

Our untar typically takes around 60-100 sec of system time
when things are going OK, but 300-1000+ sec when the sluggishness occurs.
The time tends to increase as we get closer to 99% full, sometimes
reaching 4000+ sec.
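
We pull those numbers from /usr/bin/time, appending to a log so we can
watch the growth as the disk fills; roughly (paths are placeholders):

	# time writes to stderr; -l adds rusage detail on FreeBSD
	/usr/bin/time -l tar xpf /var/tmp/sample.tar -C /mnt/test/run.$i \
		2>> /var/tmp/untar-times.log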

It wasn't clear to me from your mail whether I should newfs the entire
fs and start over, or whether I could expect the settings
to make a difference for any NEW data.

I can newfs and start over if you think it's required. The test will
then take several hours to run, since we need at least 85% disk usage
before we start seeing the problem.

Thanks!
k

Julian Elischer wrote:
>>From mckusick at beastie.mckusick.com  Wed Oct 22 22:30:03 2003
>>X-Original-To: julian at vicor-nb.com
>>Delivered-To: julian at vicor-nb.com
>>To: Ken Marx <kmarx at vicor.com>
>>Subject: Re: 4.8 ffs_dirpref problem 
>>Cc: freebsd-fs at freebsd.org, cburrell at vicor.com, davep at vicor.com,
>>	jpl at vicor.com, jrh at vicor.com, julian at vicor-nb.com, VicPE at aol.com,
>>	julian at vicor.com, Grigoriy Orlov <gluk at ptci.ru>
>>In-Reply-To: Your message of "Wed, 22 Oct 2003 12:57:53 PDT."
>>             <20031022195753.27C707A49F at mail.vicor-nb.com> 
>>Date: Wed, 22 Oct 2003 16:37:54 -0700
>>From: Kirk McKusick <mckusick at beastie.mckusick.com>
> 
> 
>>I believe that you can solve your problem by tuning the existing
>>algorithm using tunefs. There are two parameters to control dirpref,
>>avgfilesize (which defaults to 16384) and filesperdir (which defaults
>>to 50). I suggest that you try using an avgfilesize of 4096 and
>>filesperdir of 1500. This is done by running tunefs on the unmounted
>>(or at least mounted read-only) filesystem as:
> 
> 
>>	tunefs -f 4096 -s 1500 /dev/<disk for my broken filesystem>
> 
> 
> On the same filesystem there are directories that contain 1 GB files
> and others that contain maybe a hundred 100 KB files (images).
> 
> 
> 
>>Note that this affects future layout, so needs to be done before you
>>put any data into the filesystem. If you are building the filesystem
>>from scratch, you can use:
> 
> 
> Would this have an effect on an existing filesystem with respect to new data
> being added to it?
> 
> 
> 
> 
> 
>>	newfs -g 4096 -h 1500 ...
>>
>>to set these fields. Please let me know if this solves your problem.
>>If it does not, I will ask Grigoriy Orlov <gluk at ptci.ru> if he has
>>any ideas on how to proceed.
> 
> 
>>	Kirk McKusick
> 
> 
>>=-=-=-=-=-=-=
> 
> 
> 

-- 
Ken Marx, kmarx at vicor-nb.com
It's too costly to get lean and mean and analyze progress on the diminishing 
expectations.
		- http://www.bigshed.com/cgi-bin/speak.cgi


