Upgrading ZFS compression

krad kraduk at gmail.com
Tue Aug 20 07:22:34 UTC 2013


Correct, and the same applies when you enable dedup: only newly written
blocks pick up the change. So it's possible for a file spanning multiple
blocks to have multiple compression algorithms applied to it. What I have
done in the past is rsync the tree to a new location, then rename the trees
and delete the original. This isn't always feasible, though.
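
A rough sketch of that copy-and-swap approach, assuming the tree lives
under a single dataset at a hypothetical path /tank/data (paths and rsync
flags are illustrative, not taken from the thread):

  # Copy the tree so every block is rewritten under the current
  # compression setting (archive mode keeps permissions and times;
  # -H preserves hard links).
  rsync -aH /tank/data/ /tank/data.new/

  # Swap the trees, then remove the original once the copy checks out.
  mv /tank/data /tank/data.old
  mv /tank/data.new /tank/data
  rm -rf /tank/data.old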


On 19 August 2013 18:23, Johan Hendriks <joh.hendriks at gmail.com> wrote:

> Op maandag 19 augustus 2013 schreef Ivan Voras (ivoras at freebsd.org):
>
> > Hello,
> >
> > Just a quick question: if I have a file system with LZJB, write a file
> > on it so it gets compressed, then change the compression setting on the
> > file system to LZ4, will new random writes to the file use the new
> > compression algorithm?
> >
> > By looking at the data structures (dnode_phys_t) it looks like the
> > compression is set per-file object, so no.
> >
> > OTOH, new files on the file system will pick up new compression
> > settings, right?
>
>
>  As far as I know, all new files written to the dataset will be compressed
> using the new compression type.
>
> Regards
> Johan
> _______________________________________________
> freebsd-fs at freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"
>
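
As a minimal illustration of the behaviour described above, against a
hypothetical dataset tank/data (names are examples only); the property
change affects only blocks written after it is set:

  # Existing blocks keep their old (e.g. LZJB) compression.
  zfs get compression tank/data
  zfs set compression=lz4 tank/data

  # Files created, or blocks rewritten, from this point on use LZ4.
  cp /path/to/somefile /tank/data/
  zfs get compressratio tank/data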

