Millions of small files: best filesystem / best options

Adam Nowacki nowakpl at platinum.linux.pl
Mon May 28 16:00:34 UTC 2012


I don't think any regular file system will be able to cope. If you 
really need these as files, start with sysutils/fusefs-sqlfs, and then 
maybe look at PostgreSQL- or MySQL-backed FUSE filesystem modules.
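
For what it's worth, the space win of an SQL-backed store comes from 
packing many records into each database page instead of burning a 
filesystem block per record. A minimal sketch of that idea in Python; 
the database path, table and column names are invented for the example, 
and this is not the fusefs-sqlfs interface itself:

#!/usr/bin/env python
# Illustration only: keep ~200-byte records as rows in SQLite instead of
# one file each.  SQLite packs many rows into each database page, so a
# small record does not consume a whole filesystem block.
import sqlite3

conn = sqlite3.connect("/var/db/smallfiles.db")   # hypothetical path
conn.execute("""
    CREATE TABLE IF NOT EXISTS blobs (
        path TEXT PRIMARY KEY,   -- what would have been the file name
        data BLOB NOT NULL       -- the ~200-byte payload
    )""")

def put(path, data):
    with conn:  # the connection context manager commits on success
        conn.execute("INSERT OR REPLACE INTO blobs (path, data) VALUES (?, ?)",
                     (path, data))

def get(path):
    row = conn.execute("SELECT data FROM blobs WHERE path = ?",
                       (path,)).fetchone()
    return None if row is None else bytes(row[0])

put("a/b/0001", b"x" * 200)
print(len(get("a/b/0001")))   # -> 200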

On 2012-05-28 15:21, Alessio Focardi wrote:
> Hi,
>
> I'm pretty new to BSD, but I do have some knowledge in Linux.
>
> I'm looking for advice on how to efficiently pack millions of small files (200 bytes or less) onto a FreeBSD filesystem.
>
> The files will be stored in a hierarchical directory structure to limit the number of files in any one directory and so (I hope!) speed up file lookups and deletion.
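
For the hierarchical layout, one common scheme is to derive the 
directory levels from a hash of the key, so that no directory grows 
beyond a few hundred entries regardless of how the keys are distributed. 
A rough sketch, with the base directory and fan-out width made up for 
the example:

#!/usr/bin/env python
# Two directory levels of 2 hex characters each = 256 * 256 = 65536 leaf
# directories; 10 million files spread over them is ~150 files per
# directory.  BASE and the fan-out width are assumptions, not advice.
import hashlib
import os

BASE = "/data/smallfiles"   # hypothetical base directory

def path_for(key):
    h = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return os.path.join(BASE, h[0:2], h[2:4], h)

def store(key, data):
    p = path_for(key)
    os.makedirs(os.path.dirname(p), exist_ok=True)
    with open(p, "wb") as f:
        f.write(data)

store("record-0001", b"x" * 200)
print(path_for("record-0001"))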
>
> I have to say that I'm looking at FreeBSD for my project because both UFS2 and ZFS offer some flavour of "block suballocation", "tail packing", or "variable record size"; at least the documentation says so.
>
> My hope is to waste as little space as possible, even at the cost of some speed: I can't afford a full block per file, or I will end up wasting 99% of the space!
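
To put rough numbers on that waste: a 200-byte file always occupies at 
least the filesystem's minimum allocation unit. Assuming a 32 KiB UFS2 
block, a 4 KiB UFS2 fragment and a 512-byte ZFS sector as that unit 
(the real figures depend on how the filesystem was created, and 
metadata is not counted here), the arithmetic looks like this:

# Back-of-the-envelope waste for a 200-byte payload under a few assumed
# minimum allocation units.
FILE_SIZE = 200

for label, alloc in [("full 32 KiB UFS2 block", 32768),
                     ("4 KiB UFS2 fragment", 4096),
                     ("512-byte ZFS sector", 512)]:
    waste = 100.0 * (alloc - FILE_SIZE) / alloc
    print("%-24s %6d bytes allocated, %.1f%% wasted" % (label, alloc, waste))

# full 32 KiB UFS2 block    32768 bytes allocated, 99.4% wasted
# 4 KiB UFS2 fragment        4096 bytes allocated, 95.1% wasted
# 512-byte ZFS sector         512 bytes allocated, 60.9% wasted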
>
>
> Does anyone have experience with a similar situation, and is willing to give some advice on which filesystem I should choose and how to tune it for this particular scenario?
>
>
> Thank you very much, appreciated!
>
>
> ps
>
> I know that a database would probably be a better fit in this situation, but in my case I can't take that route :(
>
>
>
> Alessio Focardi
> ------------------
>
>


