FreeBSD ports USE_XZ critical issue on low-RAM computers
lasse.collin at tukaani.org
Sat Jun 19 06:05:17 UTC 2010
On 2010-06-18 Matthias Andree wrote:
> I've just had xz break my devel/libtool22 FreeBSD port build on a low
> memory computer (128 MB).
> Reason is that xz by default caps memory use at ~40% of physical RAM
> (this is documented), and skips decompressing files if that doesn't
> suffice.
A snapshot of XZ Utils newer than the 4.999.9beta release has a
different default limit (I know that an official release would be nice
etc., but there isn't one right now):
- If 40 % of RAM is at least 80 MiB, 40 % of RAM is used as the limit.
- If 80 % of RAM is less than 80 MiB, 80 % of RAM is used as the limit.
- Otherwise 80 MiB is used as the limit.
The above avoids the problem on most systems since 80 MiB is enough for
all typical .xz files.
The limit was a problem on Gentoo too. There's still a problem with the
above default limit if you have 16 MiB RAM (that's a real-world
example), so Gentoo put XZ_OPT=--memory=max into their build system.
They too think it is better to let the system be slow and swap very
heavily for an hour or two than to refuse decompression in a critical
script.
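Gentoo's workaround amounts to one exported variable, since xz reads
XZ_OPT on every invocation; this sketch uses the --memory spelling from
this thread (later releases renamed the option to --memlimit):

```shell
# One exported variable lifts the decompressor memory limit for the
# whole build system, without patching individual scripts.
# "--memory=max" is the option spelling used in this thread.
export XZ_OPT=--memory=max

# Any later call such as `xz -dc foo.tar.xz | tar xf -` then runs
# without a memory limit.
echo "$XZ_OPT"   # prints "--memory=max"
```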
> - This feature is, in my perception, INADEQUATE during decompression.
> If I have a .xz file (downloaded from the Internet) that needs 90 MB
> RAM to decompress, then I need to use those 90 MB no matter if
> that's nice or not, it's just critical.
> I am proposing to Lasse Collin to drop memory capping functionality
> in xz/lzma in decompress mode, and in lzmadec/xzdec.
Naturally the limiter functionality won't be removed, but a different
default value can be considered, including no limit by default.
Would you find no limit OK if xz allocated and used 1 GiB of memory
without warning when you decompressed a relatively big file you had
just downloaded on a slightly older system with 512 MiB of RAM? I guess
that if it is a critical file decompressed by a critical script, you
don't mind it swapping for a couple of hours, because you just want it
done no matter how long it takes. But in normal command line use some people
would prefer to get an error first so that it is possible to consider
e.g. using another system to do the decompression (possibly
recompressing with lower settings or with another tool) instead of just
overriding the limit.
One possibility could be to make the limit for decompression e.g.
max(80 MiB, 40 % of RAM), since all typical files will decompress with
80 MiB (you need to use advanced options to create files needing more).
That way, systems with less than 128 MiB of RAM would also decompress
all typical files by default, possibly slowly with heavy swapping,
while systems with more RAM would be protected from the unexpected
memory usage of very rarely occurring .xz files.
Determining a good limit has been quite a bit of a problem for me.
Obviously a DoS protection mechanism shouldn't be a DoS itself.
Disabling the limiter completely by default doesn't seem like an option,
because it would only change who will be annoyed. Comments are very
welcome.
Lasse Collin | IRC: Larhzu @ IRCnet & Freenode