SSD recommendations for ZFS cache/log

Artem Belevich art at freebsd.org
Tue Nov 20 04:20:50 UTC 2012


On Mon, Nov 19, 2012 at 8:02 PM,  <kpneal at pobox.com> wrote:
> Advising people to use dedup when high dedup ratios are expected, and
> advising people to otherwise not use dedup, is by itself incorrect advice.
> Rather, dedup should only be enabled on a system with a large amount of
> memory. The usual advice of 1 GB of RAM per 1 TB of disk is flat-out wrong.
>
> Now, I do not know how much memory to give as a minimum. I suspect that
> the minimum should be more like 16-32G, with more if large amounts of
> deduped data are to be removed by destroying entire datasets. But that's
> just a guess.

For what it's worth, Oracle has published an article on memory sizing
for dedup:
http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-113-size-zfs-dedup-1354231.html

In a nutshell, it's roughly 320 bytes of RAM per DDT record, with one
record per unique block written. How many records you end up with
depends on your data set and on how it was written (chiefly the
effective block size).
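
To put a rough number on it, here's a quick back-of-the-envelope
sketch (Python; the 10 TiB pool and 64 KiB average block size are
made-up example figures, and it assumes every block is unique):

    # Rough in-core DDT size estimate: ~320 bytes per DDT record
    # (per the Oracle article), one record per unique block.
    pool_bytes = 10 * 2**40        # example: 10 TiB of data to dedup
    avg_block_bytes = 64 * 2**10   # example: 64 KiB average block size

    records = pool_bytes // avg_block_bytes   # ~168 million records
    ddt_bytes = records * 320                 # ~50 GiB of DDT

    print("DDT records: %d" % records)
    print("DDT size:    %.1f GiB" % (ddt_bytes / 2**30))

That works out to about 5 GiB of DDT per TiB of data at 64K blocks,
which is one reason a blanket "1G per 1TB" rule doesn't hold. For an
existing pool, "zdb -DD <pool>" should show actual DDT statistics, and
"zdb -S <pool>" can simulate dedup on a pool that doesn't have it
enabled yet, which gives a much better record count than guessing.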

--Artem

