Improving ZFS performance for large directories

Kevin Day toasty at dragondata.com
Wed Jan 30 15:15:10 UTC 2013


On Jan 30, 2013, at 4:20 AM, "Ronald Klop" <ronald-freebsd8 at klop.yi.org> wrote:

> On Wed, 30 Jan 2013 00:20:15 +0100, Kevin Day <toasty at dragondata.com> wrote:
> 
>> 
>> I'm trying to improve performance when using ZFS in large (>60000 files) directories. A common activity is to use "getdirentries" to enumerate all the files in the directory, then "lstat" on each one to get information about it. Doing an "ls -l" in a large directory like this can take 10-30 seconds to complete. Trying to figure out why, I did:
>> 
>> ktrace ls -l /path/to/large/directory
>> kdump -R |sort -rn |more
> 
> Does ls -lf /path/to/large/directory make a difference? The -f flag tells ls not to sort the directory, so it can traverse it more efficiently.
> 
> Ronald.

Nope, the sort adds only a trivial amount of extra time to the entire operation. Nearly all the time is spent in lstat() and getdirentries(). Good idea though!
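
For reference, the access pattern being measured looks roughly like the sketch below (this program is not from the original post; the output format is illustrative). readdir(3) in libc is a wrapper around getdirentries(2), so an "ls -l" boils down to one getdirentries() loop plus one lstat() per entry, and with >60000 entries it is the per-entry lstat() that dominates:

/*
 * Minimal sketch of the "ls -l" pattern under discussion:
 * enumerate a directory, then lstat(2) every entry.
 * readdir(3) calls getdirentries(2) underneath, so the same
 * two syscalls show up in the ktrace output.
 */
#include <sys/stat.h>
#include <dirent.h>
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int
main(int argc, char *argv[])
{
	const char *dir = argc > 1 ? argv[1] : ".";
	char path[PATH_MAX];
	struct dirent *d;
	struct stat sb;
	DIR *dp;

	if ((dp = opendir(dir)) == NULL)
		return (1);
	while ((d = readdir(dp)) != NULL) {
		snprintf(path, sizeof(path), "%s/%s", dir, d->d_name);
		/*
		 * One lstat() per entry: with >60000 files this is
		 * where nearly all of the wall-clock time goes.
		 */
		if (lstat(path, &sb) == 0)
			printf("%ju\t%s\n", (uintmax_t)sb.st_size,
			    d->d_name);
	}
	closedir(dp);
	return (0);
}

Tracing this loop with ktrace/kdump, as shown earlier in the thread, should reproduce the same breakdown of time.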

-- Kevin


