Musings on ZFS Backup strategies

Steven Hartland killing at
Sat Mar 2 23:13:08 UTC 2013

----- Original Message ----- 
From: "Karl Denninger" <karl at>
> Reality however is that the on-disk format of most database files is
> EXTREMELY compressible (often WELL better than 2:1), so I sacrifice
> there.  I think the better option is to stuff a user parameter into the
> filesystem attribute table (which apparently I can do without boundary)
> telling the script whether or not to compress on output so it's not tied
> to the filesystem's compression setting.
> I'm quite-curious, in fact, as to whether the "best practices" really
> are in today's world.  Specifically, for a CPU-laden machine with lots
> of compute power I wonder if enabling compression on the database
> filesystems and leaving the recordsize alone would be a net performance
> win due to the reduction in actual I/O volume.  This assumes you have
> the CPU available, of course, but that has gotten cheaper much faster
> than I/O bandwidth has.
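The per-filesystem flag described above maps naturally onto a ZFS user property (user property names must contain a colon and survive send/recv). A minimal sketch — the property name "backup:gzip", the dataset names, and the helper function are my own illustration, not anything from the thread:

```shell
#!/bin/sh
# Sketch only: drive backup-time compression from a ZFS user property
# rather than the filesystem's compression setting.

backup_fs() {
    fs="$1"; dest="$2"
    snap="${fs}@backup-$(date +%Y%m%d)"
    zfs snapshot "$snap"
    # -H drops headers, -o value prints just the property value
    if [ "$(zfs get -H -o value backup:gzip "$fs")" = "on" ]; then
        zfs send "$snap" | gzip > "${dest}.gz"
    else
        zfs send "$snap" > "$dest"
    fi
}

# Opt a dataset in once, and every later run of the script honours it:
#   zfs set backup:gzip=on tank/db
#   backup_fs tank/db /backup/db.zfs
```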

We've been using ZFS compression on MySQL filesystems for quite some
time and have had good success with it. As you say, though, it is
dependent on the hardware, so you need to know where the bottleneck
in your system is: CPU or disk.
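For reference, turning it on and seeing what it actually buys you is a two-liner (dataset name assumed; lz4 needs a pool that supports it, older pools can use gzip instead):

```shell
# Assumed dataset name; pick an algorithm your pool version supports.
zfs set compression=lz4 tank/mysql
# Only newly written blocks get compressed; after some write activity:
zfs get compressratio tank/mysql
```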

MySQL 5.6 also added better recordsize support, which could be interesting.
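The recordsize angle: MySQL 5.6's innodb_page_size option lets you pick the InnoDB page size at initialisation, so the two layers can be lined up. A sketch, with the dataset name assumed — and note recordsize only affects files created after it is set:

```shell
# Match ZFS recordsize to the InnoDB page size (16k is InnoDB's default),
# set BEFORE the data files are created.
zfs set recordsize=16k tank/mysql/data

# MySQL side (5.6+, must be set before initialising the data directory):
#   [mysqld]
#   innodb_page_size = 16k
```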

Also be aware of the additional latency the compression can add. I'm
also not 100% sure that the compression in ZFS scales beyond one core;
it's been something I've meant to look into and test but haven't got
round to.
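A crude way to eyeball the scaling question on a test box (not a proper benchmark, and the paths are assumed): write a large, highly compressible stream to a compressed dataset while watching per-CPU load in another terminal.

```shell
# Write ~4 GB of compressible data to a dataset with compression enabled,
# while "top -P" (FreeBSD's per-CPU view) runs in a second terminal to see
# whether more than one core is doing the compression work.
dd if=/dev/zero of=/tank/mysql/compress-test bs=1m count=4096
rm /tank/mysql/compress-test
```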



More information about the freebsd-stable mailing list