ZFS directory with a large number of files

Sean Rees seanrees at gmail.com
Sun Aug 7 09:20:21 UTC 2011


On Aug 6, 2011, at 07:24, Gary Palmer wrote:

> On Fri, Aug 05, 2011 at 08:56:36PM -0700, Doug Barton wrote:
>> On 08/05/2011 20:38, Daniel O'Connor wrote:
>> 
>>> Ahh, but the OP had moved these files away and performance was still poor... _that_ is the bug.
>> 
>> I'm no file system expert, but it seems to me the key questions are: how
>> long does it take the system to recover from this condition, and if it's
>> more than N $periods, is that a problem? We can't stop users from doing
>> wacky stuff, but the system should be robust in the face of this.
> 
> It's been quite a while since I worked on the filesystem code in any
> detail, but I believe that, at least for UFS, the directory is not
> garbage-collected; it is only truncated if enough of the entries at the
> end are deleted to free up at least one fragment or block.  If you create
> N files, then create a directory and move the N files into it, the new
> directory's entry will still be N+1 records into the parent directory,
> and the only way to "recover" is to recreate the directory that formerly
> contained the N files.  It is theoretically possible to compact the
> directory, but since the code to do that hadn't been written when I last
> worked with UFS, I suspect it's non-trivial.
> 
> I don't know what ZFS does in this situation.

It sounds like ZFS does something similar.
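
If ZFS really is skipping the compaction Gary describes for UFS, the directory's own reported size should stay inflated after the entries are gone. A quick check for that (I'm assuming the st_size that stat -f %z reports for a directory tracks whatever ZFS keeps on disk for it; ls -ld would do as well) would be:

% mkdir sizetest
% stat -f %z sizetest                  # empty directory
% (cd sizetest && for i in {1..1000}; do touch $i; done)
% stat -f %z sizetest                  # after creating 1000 entries
% rm -f sizetest/*
% stat -f %z sizetest                  # after deleting them: does it shrink?

If the last number matches the second, the directory never gives the space back, which would line up with Gary's description.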

I re-ran the experiment to see if I could narrow down the problem.

% mkdir foo
% cd foo && for i in {1..1000}; do touch $i; done
% ls > list
% for file in $(cat list); do rm -f $file; done
% time ls
(slow!)
% rm -f list
% time ls
(slow!)
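
Another pass I want to try: compare against a directory that never held the files at all, which should separate the directory's history from its current contents (the names here are just for illustration):

% cd .. && mkdir bar        # fresh directory, never populated
% time ls bar
% time ls foo               # the churned directory from above

If ls on foo stays slow while bar is instant, it's the leftover directory structure causing this rather than anything about the dataset as a whole.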

I would like to dig into this a bit more; I suppose it's a good enough reason to explore how DTrace works :)
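
As a starting point (assuming the fbt provider exposes the ZFS kernel module under the name "zfs" on this box; run as root), something like this should show which ZFS functions the slow ls is hitting:

# dtrace -n 'fbt:zfs::entry /execname == "ls"/ { @calls[probefunc] = count(); }'

Kick off the slow ls in another terminal, then ^C dtrace to dump the per-function call counts. Counts alone won't show where the time goes, but they should at least point at the lookup/readdir path worth profiling next.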

Sean

