Is LZ4 compression of the ZFS L2ARC available in any RELEASE/STABLE?

Olav Gjerde olav at backupbay.com
Wed Mar 5 07:24:19 UTC 2014


Currently I've set the recordsize to 8k; however, I'm thinking a
recordsize of 4k may be more optimal.
This is because the compressratio with LZ4 is around 2.5, and this value
has been constant for all my data while it grew from a few megabytes to
tens of gigabytes.
Maybe that's something I should play with to see if it makes a difference.
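
If it helps, here's a minimal sketch of how that experiment could look;
the dataset names are hypothetical, and note that recordsize only applies
to blocks written after the change:

    # check the current settings and the achieved compression
    zfs get recordsize,compression,compressratio tank/pgdata

    # recordsize only affects newly written blocks, so create a fresh
    # dataset with the candidate value and copy the data in to compare
    zfs create -o recordsize=4k -o compression=lz4 tank/pgdata4k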


On Wed, Mar 5, 2014 at 3:40 AM, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:

> On Tue, 4 Mar 2014, Olav Gjerde wrote:
>
>> I managed to mess up who I replied to, and Matthew replied back with a
>> good answer which I think didn't reach the mailing list.
>>
>> I actually have a problem with query performance in one of my databases,
>> related to running PostgreSQL on ZFS, which is why I'm so interested in
>> compression for the L2ARC cache. The problem is random IO reads: creating
>> a report where I aggregate 75000 rows takes 30 minutes!!! The table that
>> I query has 400 million rows, though.
>> The dataset easily fits in memory, so if I run the same query again it
>> takes less than a second.
>>
>
> Make sure that your database is on a filesystem with zfs block-size
> matching the database block-size (rather than 128K).  Otherwise far more
> data may be read than needed, and likewise, writes may result in writing
> far more data than needed.
>
> Regardless, L2ARC on SSD is a very good idea for this case.
>
> Bob
> --
> Bob Friesenhahn
> bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
>
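
For the archives, a minimal sketch of both suggestions above; the pool,
dataset, and device names are hypothetical, and PostgreSQL's default block
size (BLCKSZ) is 8 KiB:

    # match the ZFS recordsize to PostgreSQL's 8 KiB block size
    # instead of the 128K default (names are hypothetical)
    zfs create -o recordsize=8k -o compression=lz4 tank/pgdata

    # add an SSD as an L2ARC cache device to the pool
    zpool add tank cache /dev/ada1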



-- 
Olav Grønås Gjerde

