ZFS/compression/performance

Johannes Totz jtotz at imperial.ac.uk
Wed Oct 12 12:03:04 UTC 2011


On 12/10/2011 00:25, Dennis Glatting wrote:
> I would appreciate someone knowledgeable in ZFS point me in the right
> direction.
> 
> I have several ZFS arrays, some using gzip for compression. The
> compressed arrays hold very large text documents (10MB->20TB) and are
> highly compressible. Reading the files from the compressed data sets
> is fast with little load. However, writing to the compressed data sets
> incurs substantial load, on the order of a load average of 12 to 20.
> 
> My questions are:
> 
> 1) Why such a heavy load on writing?
> 2) What kind of limiters can I put into effect to reduce load
>    without impacting compressibility? For example, is there some
>    variable that controls the number of parallel compression
>    operations?
> 
> I have a number of different systems. Memory is 24GB on each of the two
> large data systems, SSD (Revo) for cache, and a SATA II ZIL. One system
> is a 6-core i7 @ 3.33 GHz and the other a 4-core i7 @ 2.93 GHz. The
> arrays are RAIDz using cheap 2TB disks.

Artem gave you a pretty good explanation.
I just did a simple write test yesterday:

1) 6 MB/sec for gzip, 1.36x ratio
2) 34 MB/sec for lzjb, 1.23x ratio
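
If anyone wants to reproduce something similar, a rough sketch (pool,
dataset and file names below are placeholders, not my actual layout):

  zfs create -o compression=gzip tank/t-gzip
  zfs create -o compression=lzjb tank/t-lzjb

  # write the same large text file into each and note dd's throughput
  # (only a rough number, since the writes are buffered/async)
  dd if=/path/to/big.txt of=/tank/t-gzip/big.txt bs=1m
  dd if=/path/to/big.txt of=/tank/t-lzjb/big.txt bs=1m

  # compare the resulting on-disk ratios
  zfs get compressratio tank/t-gzip tank/t-lzjb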

I'll stick with lzjb. It's good enough to get rid of most of the
redundancy, and the speed is acceptable.
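
On question (2): compression is a per-dataset property, and gzip takes
an explicit level (gzip-1 through gzip-9; plain "gzip" means gzip-6),
so a lower level trades some ratio for less CPU on writes. A sketch,
with a placeholder dataset name:

  # see what is in effect now
  zfs get compression,compressratio tank/docs

  # lighter gzip level; only blocks written after the change are
  # affected
  zfs set compression=gzip-1 tank/docs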


