ZFS directory with a large number of files

seanrees at gmail.com
Tue Aug 2 09:16:37 UTC 2011


Replying inline.

On Tue, Aug 2, 2011 at 10:08 AM, Jeremy Chadwick
<freebsd at jdc.parodius.com> wrote:
> On Tue, Aug 02, 2011 at 08:39:03AM +0100, seanrees at gmail.com wrote:
>> On my FreeBSD 8.2-S machine (built circa 12th June), I created a
>> directory and populated it over the course of 3 weeks with about 2
>> million individual files.
>
> I'll keep this real simple:
>
> Why did you do this?
>
> I hope this was a stress test of some kind.  If not:

Not really, but it turned into one.

The camera I was using had the ability (rather handily) to upload a
still image once per second via FTP to a server of my choosing. It
didn't have the ability to organize them for me in a neat directory
hierarchy. So I went on holiday for 3 weeks and came back to ~2M
images in the same directory.
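For reference, three weeks at one upload per second is 21 * 86,400 ≈ 1.8M
files, which lines up with the ~2M figure. Below is a minimal sketch of the
fan-out the camera didn't do for me, assuming (hypothetically) that
filenames carry a leading YYYYMMDD timestamp — the paths and naming scheme
are illustrative, not what my camera actually produced:

```shell
#!/bin/sh
# Sketch only: fan a flat directory of once-per-second uploads out into
# a YYYY/MM/DD tree. Assumes (hypothetically) filenames like
# 20110712-093015.jpg; adjust the character extraction to match the
# real naming scheme. SRC/DST are placeholder paths.
SRC=${SRC:-/tmp/camera-flat}
DST=${DST:-/tmp/camera-tree}

# Demo setup standing in for the real uploads.
mkdir -p "$SRC" "$DST"
for name in 20110712-093015 20110712-093016 20110713-081200; do
    : > "$SRC/$name.jpg"
done

for f in "$SRC"/*.jpg; do
    base=${f##*/}                   # e.g. 20110712-093015.jpg
    y=$(echo "$base" | cut -c1-4)   # 2011
    m=$(echo "$base" | cut -c5-6)   # 07
    d=$(echo "$base" | cut -c7-8)   # 12
    mkdir -p "$DST/$y/$m/$d"
    mv "$f" "$DST/$y/$m/$d/"
done
```

Run nightly from cron, something like this would have kept any single
directory to ~86,400 entries at worst.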


> This is the 2nd or 3rd mail in recent months from people saying "I
> decided to do something utterly stupid with my filesystem[1] and now I'm
> asking why performance sucks".
>
> Why can people not create proper directory tree layouts to avoid this
> problem regardless of what filesystem is used?  I just don't get it.


I'm not sure it's utterly stupid; I didn't expect legendarily fast
performance from 'ls' or anything else that enumerated the contents of
the directory when all the files were there. Now that the files are
neatly organized, I expected fstatfs() on the directory to become fast
again. It isn't. I'd like to understand why (or maybe learn a new
trick or two about inspecting ZFS...)
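To make the distinction concrete, here's a rough, self-contained sketch of
the two classes of operation (tiny file count and placeholder paths — the
real directory had ~2M entries): statfs()/fstatfs()-style calls only touch
per-filesystem metadata and should be cheap, while anything that enumerates
the directory has to read every entry.

```shell
#!/bin/sh
# Illustrative only; DIR is a placeholder, and 200 files stand in for
# the ~2M in the real case.
DIR=${DIR:-/tmp/zfs-dir-demo}

mkdir -p "$DIR"
i=0
while [ "$i" -lt 200 ]; do : > "$DIR/file$i"; i=$((i+1)); done

# df(1) bottoms out in statfs()/fstatfs() on the mount point: cheap,
# per-filesystem counters only, no directory traversal.
df "$DIR" > /dev/null && echo "statfs path: ok"

# Enumeration must read every directory entry; with 2M entries this is
# where the time goes (ls additionally sorts unless told not to).
count=$(find "$DIR" -maxdepth 1 -type f | wc -l)
echo "entries: $count"

# On the real system, the directory object could be inspected with
# something like `zdb -dddd pool/dataset <object#>` (assumption, not
# verified here) to see how its ZAP is laid out on disk.
```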


Sean


More information about the freebsd-stable mailing list