filesystem performance with lots of small files
Peter Jeremy
PeterJeremy at optushome.com.au
Fri Aug 26 10:14:43 GMT 2005
On Thu, 2005-Aug-25 19:58:02 +0200, Marian Hettwer wrote:
>Back to the topic. I have a directory with several hundred thousand
>(800k and more) small files. UFS2 shows pretty low performance.
Is your problem lots of small files or lots of files in a single
directory? These are totally different problems. And what do you
mean by "pretty low performance"? What are you measuring?
Unix filesystems use linear searching of directories. UFS with the
UFS_DIRHASH option gains some performance, but at some point you still
need to scan the entire directory to determine whether a filename is
present or not. Your solution is to avoid having lots of files in a
single UFS directory: either use a directory tree (as squid and some
inn configurations do) or use an inode filesystem (which I thought had
been committed, but I can't see it in NOTES).
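
Something along the following lines works for the directory-tree
approach: hash each filename into a two-level subdirectory so that no
single directory ever holds more than a few thousand entries. This is
only a minimal sketch; the djb2 hash and the 16x256 fan-out are
illustrative choices of mine, not necessarily what squid does.

/*
 * Illustrative sketch: map a filename to base/XX/YY/filename, creating
 * the two directory levels on demand.  Hash and fan-out are arbitrary.
 */
#include <stdio.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/stat.h>

static unsigned
hash_name(const char *name)
{
	unsigned h = 5381;		/* djb2 string hash */

	while (*name != '\0')
		h = h * 33 + (unsigned char)*name++;
	return (h);
}

/* Build "base/XX/YY/name" in buf, creating the two levels as needed. */
static int
hashed_path(const char *base, const char *name, char *buf, size_t buflen)
{
	unsigned h = hash_name(name);
	unsigned l1 = h & 0x0f;		/* 16 first-level directories   */
	unsigned l2 = (h >> 4) & 0xff;	/* 256 second-level directories */
	char dir[1024];

	snprintf(dir, sizeof(dir), "%s/%02x", base, l1);
	if (mkdir(dir, 0755) == -1 && errno != EEXIST)
		return (-1);
	snprintf(dir, sizeof(dir), "%s/%02x/%02x", base, l1, l2);
	if (mkdir(dir, 0755) == -1 && errno != EEXIST)
		return (-1);
	snprintf(buf, buflen, "%s/%s", dir, name);
	return (0);
}

int
main(int argc, char **argv)
{
	char path[1024];

	if (argc != 3) {
		fprintf(stderr, "usage: %s basedir filename\n", argv[0]);
		return (1);
	}
	if (hashed_path(argv[1], argv[2], path, sizeof(path)) == -1) {
		perror("mkdir");
		return (1);
	}
	printf("%s\n", path);
	return (0);
}

Creating files through hashed_path() keeps every individual directory
small, so the linear (or dirhash-assisted) lookup stays cheap.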
For "lots of small files", any filesystem is going to have relatively
low I/O performance because the overheads involved in accessing the
first block of a file are fixed and you don't get any benefit from
large-block sequential read-ahead that means that reading 64K-128K
isn't much slower than reading 1K.
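
As a rough illustration (assuming a single disk doing on the order of
100 random I/O operations per second and ~50MB/s of sequential
throughput; adjust for your hardware): reading 800,000 separate 1K
files costs roughly 800,000 head repositionings, i.e. a couple of hours
of pure seek time, whereas streaming the same ~800MB as one sequential
file would take well under a minute.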
--
Peter Jeremy