ZFS cpu requirements, with/out compression and/or dedup
Quartz
quartz at sneakertech.com
Mon Sep 21 14:10:48 UTC 2015
>> Any algorithm for TB's of storage and cpu/ram is usually wrong.
>
> dedup is kind of a special case though, because it has to keep the
> entire DDT in non-paged ram (assuming you want the machine to be usable).
>
> Of course, the rule of thumb is for USED space. 40TB of blank space
> won't need any ram obviously.
Also, just for reference: according to the specs, each entry in the dedup
table costs about 320 bytes of memory per block on disk. This means that
AT BEST (assuming ZFS decides to use full 128K blocks in your case,
which works out to 8M blocks per TB) you'll need about 2.5GB of ram per
1TB of used space just for the DDT (not counting ARC and everything
else). Most systems are probably not going to be lucky enough to average
128K blocks though, so in real-world terms you're looking at several GB
of ram per TB of disk, and in worst-case scenarios you might need a
couple hundred GB... but at that point you should be offloading the DDT
onto a fast SSD L2ARC instead.
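To make the arithmetic above concrete, here's a rough back-of-the-envelope estimator, assuming only the ~320 bytes-per-entry figure quoted from the specs; the function and constant names are my own, not anything from ZFS itself.

```python
# Rough estimate of RAM needed for the ZFS dedup table (DDT) alone,
# ignoring ARC and other overhead. Based on the ~320 bytes per
# block entry figure mentioned above; names here are illustrative.

DDT_ENTRY_BYTES = 320  # approximate in-core cost of one DDT entry

def ddt_ram_bytes(used_bytes, avg_block_size):
    """Estimate DDT memory for a pool with the given used space
    and average block (record) size."""
    blocks = used_bytes // avg_block_size
    return blocks * DDT_ENTRY_BYTES

TIB = 1 << 40
GIB = 1 << 30

# Best case: full 128K records -> 2^23 blocks per TiB used
print(ddt_ram_bytes(1 * TIB, 128 * 1024) / GIB)  # 2.5 GiB per TiB

# Much worse: an 8K average block size -> 16x more entries
print(ddt_ram_bytes(1 * TIB, 8 * 1024) / GIB)    # 40.0 GiB per TiB
```

Running the numbers this way shows why the average block size dominates: halving the block size doubles the DDT, so small-block workloads are exactly the ones where dedup memory costs explode.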
More information about the freebsd-fs mailing list