Millions of small files: best filesystem / best options

Doug Barton dougb at FreeBSD.org
Tue May 29 10:45:21 UTC 2012


On 5/29/2012 2:15 AM, Alessio Focardi wrote:
>>> I ran a Usenet server this way for quite a while with fairly
>>> good results, though the average file size was a bit bigger,
>>> about 2K or so. I found that if I didn't use "-o space", space
>>> optimization wouldn't kick in soon enough and I'd tend to run
>>> out of the full blocks needed for larger files.
> 
> Fragmentation is not a problem for me; mostly I will have a write
> once/read many situation. It is still not clear to me whether
> "-o space" works within the constraints of the block/fragment
> ratio, which in my case would still mean using a 512-byte fragment
> for every 200-byte file.

To my knowledge you can't have more than one file per fragment, but
once you account for metadata you're not really wasting as much space
as it sounds.
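
To put rough numbers on that (the 512-byte fragment and 200-byte file
come from your example; the 256-byte UFS2 inode size is my assumption
about your layout, so adjust to match):

    # Back-of-the-envelope per-file cost for a 200-byte file
    # stored in its own 512-byte fragment:
    file=200 frag=512 inode=256
    echo "fragment padding: $(( frag - file )) bytes"   # 312 bytes lost
    echo "total footprint:  $(( frag + inode )) bytes"  # fragment plus inode
    echo "payload fraction: $(( file * 100 / (frag + inode) ))%"  # ~26%

The inode is allocated either way, so the fragment padding is only
part of the per-file footprint.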

If your data is truly WORM (write once, read many), I'd definitely
give -o space a try. You'll probably want to benchmark various
block/fragment combinations anyway; a sketch of one follows.
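
For what it's worth, here is a minimal sketch of trying one such
combination; the device, mount point, and inode density are
placeholders, not recommendations:

    # 4K blocks with 512-byte fragments (the 8:1 maximum ratio),
    # space optimization on, and dense inodes (roughly one per
    # fragment) so millions of tiny files don't exhaust them --
    # tune -i to your expected file count.
    # /dev/da0p1 and /mnt/test are hypothetical; adjust for your box.
    newfs -U -b 4096 -f 512 -i 512 -o space /dev/da0p1
    mount /dev/da0p1 /mnt/test
    # ... run your write-once/read-many benchmark against /mnt/test ...
    umount /mnt/test

    # An existing filesystem can be flipped to space optimization
    # without re-creating it:
    tunefs -o space /dev/da0p1

Repeat with different -b/-f/-i values and compare both throughput and
df/df -i output after loading a representative file set.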

Doug

-- 

    This .signature sanitized for your protection
