Impossible compression ratio on ZFS
jhell at DataIX.net
Mon Jun 13 20:05:38 UTC 2011
On Mon, Jun 13, 2011 at 11:57:22AM +0100, Steven Hartland wrote:
> ----- Original Message -----
> From: "Jeremy Chadwick" <freebsd at jdc.parodius.com>
> > Well-known "quirk"; welcome to ZFS. :-) The following article is long,
> > but if you grab a coffee and read it in full, it'll shed some light on
> > the ordeal:
> > http://www.cuddletech.com/blog/pivot/entry.php?id=983
> > There's also this:
> > http://blog.buttermountain.co.uk/2008/05/10/zfs-compression-when-du-and-ls-appear-to-disagree/
> > This is one of the many reasons I do not use ZFS compression. Not
> > spreading FUD, just saying stuff like this throws users for a loop, case
> > in point.
> I think you're misunderstanding my question; it's not the fact that it's
> showing different sizes from du and ls, that's 100% expected, but clearly
> 8 million rows of 3 ints can't possibly compress down to 7.5K.
> Having just looked back at the machine, an hour later, the values now
> seem correct with du showing:-
> 278M detail.ibd
> I checked this several times over what had to be 10 minutes or more, and
> even did a FLUSH TABLES to ensure everything had been written out as far
> as MySQL was concerned.
> So it seems that zfs was still processing the file for a good amount of
> time, and during that time was showing incorrect disk usage for said file.
> I'm wondering if the data is somehow being processed in the L2ARC first.
> For reference we're running 8.2-RELEASE on an Areca-backed RAID6 with
> two SSD drives in L2ARC.
> zpool status
>   pool: tank
>  state: ONLINE
>  scrub: none requested
> 	NAME       STATE     READ WRITE CKSUM
> 	tank       ONLINE       0     0     0
> 	  da0p3    ONLINE       0     0     0
> 	  ada0     ONLINE       0     0     0
> 	  ada1     ONLINE       0     0     0
> errors: No known data errors
> Obviously everything seems to have caught up and is now showing the real
> stats, but I'm confused as to why it would take quite so long to display
> the real usage via du.
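The du/ls disagreement linked in the quoted articles can be reproduced without ZFS at all: a sparse file on any filesystem shows the same split between the logical size that ls reports and the allocated blocks that du reports, which is the same distinction compression creates. A minimal sketch (the filename is arbitrary; FreeBSD spells the apparent-size flag `du -A`, GNU du spells it `--apparent-size`):

```shell
# Create a 100 MB sparse file: the logical size is set, but almost no
# data blocks are actually allocated until something is written.
truncate -s 100M sparse.dat

# ls -l reports the logical (apparent) size: 100 MB
ls -l sparse.dat

# du reports the blocks actually allocated on disk: close to zero
du -k sparse.dat

# GNU du can report the apparent size too (on FreeBSD: du -A -k)
du --apparent-size -k sparse.dat

rm sparse.dat
```

On a compressed ZFS dataset the direction is the same: ls shows the uncompressed logical length, du shows the (smaller) space actually allocated after compression.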
Knowing that there were patches out for v28 on 8.X, can you confirm that
you are in fact using ZFS v15? I would assume you are because of the
release, but I don't want to assume.
If you happen to have patched up to v28, did you turn dedup on? If so, I
would expect the behavior you're seeing, with the data not being written
out right away.
If not, then seeing you have compression turned on: did you just dump
that whole table into the database? It's quite possible that the
compression was still happening in the ARC before the data was finally
written out, and this would also explain why that happened.
Also, what level of compression are you using?
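The questions above can be answered directly on the affected machine; a sketch assuming the pool name "tank" from the zpool status output (these need a live pool, so this is illustrative only, and the dedup property only exists on newer pool versions):

```shell
# On-disk format version: v15 has no dedup; v28 (the patched version) does
zpool get version tank

# Dedup and compression settings, plus the achieved compression ratio,
# for the dataset holding the MySQL data
zfs get dedup,compression,compressratio tank

# Flush dirty data; this nudges pending transaction groups toward disk,
# after which du should report the real allocation
sync
```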