ZFS cpu requirements, with/out compression and/or dedup

krad kraduk at gmail.com
Mon Sep 21 14:11:42 UTC 2015


Nope, the DDT is only touched on writes (and frees), never on reads. ZFS
tracks free space with space maps, so a block only becomes writable again
once it is completely unreferenced. The DDT itself is a table of block
checksums, mapping each checksum to the block's address and reference count.
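Roughly, the write path works like this. Toy Python, not actual ZFS code;
dedup_write, ddt and free_blocks are all made-up names for illustration:

# Toy sketch of the dedup write path: the DDT maps a block checksum to
# [address, refcount], so every write consults it and either bumps a
# refcount or allocates a new block from free space.
import hashlib

ddt = {}                       # checksum -> [block address, reference count]
free_blocks = list(range(8))   # stand-in for the space maps / allocator

def dedup_write(data):
    """Return the on-disk address for data, allocating only if it is new."""
    csum = hashlib.sha256(data).digest()
    if csum in ddt:
        ddt[csum][1] += 1      # duplicate: just a DDT lookup and refcount bump
        return ddt[csum][0]
    addr = free_blocks.pop(0)  # unique block: allocate from free space
    ddt[csum] = [addr, 1]
    return addr

assert dedup_write(b"hello") == dedup_write(b"hello")  # second write dedups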

https://blogs.oracle.com/bonwick/en/entry/space_maps

http://www.c0t0d0s0.org/archives/7271-ZFS-Dedup-Internals.html

There are probably better references out there.
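
As for what keeps track of which blocks are in use (the question below): the
reference counts in the DDT do. Continuing the same toy sketch with the same
made-up names, the free side would look something like:

def dedup_free(csum):
    """Drop one reference; only a completely unreferenced block is freed."""
    addr, refcount = ddt[csum]
    if refcount > 1:
        ddt[csum][1] -= 1          # other files still reference the block
    else:
        del ddt[csum]              # last reference gone: drop the DDT entry
        free_blocks.append(addr)   # only now do the space maps mark it free

So yes, the table has to persist for as long as deduped data is on the pool,
but a plain read never walks it.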

On 21 September 2015 at 14:57, Quartz <quartz at sneakertech.com> wrote:

>> This is completely untrue; the performance issues with dedup are
>> limited to writes only, as ZFS needs to check the DDT on every write
>> to a filesystem with dedup enabled. Once the data is on the disk
>> there is no overhead, and in many cases there is a performance boost,
>> as less data on the disk means less head movement and the data is
>> more likely to be in any available caches. If write performance does
>> become an issue you can turn dedup off on that particular filesystem.
>> This may leave you short of capacity on the pool, but then pools are
>> easily extended.
>>
>
> It still needs to keep the tables in memory as long as there's still
> deduped data on disk though, right? Else what keeps track of which blocks
> are used by which files?

