[Bug 229670] ZFS ARC limit vfs.zfs.arc_max from /boot/loader.conf is not respected

bugzilla-noreply at freebsd.org bugzilla-noreply at freebsd.org
Mon Jul 16 19:20:11 UTC 2018


https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=229670

--- Comment #7 from Leif Pedersen <leif at ofWilsonCreek.com> ---
The machines I have observed this on vary in zpool sizes.

With regard to the "rule of thumb", one machine that behaves particularly
badly has a single zpool sized at 256 GB. It has only 10 GB referenced
(meaning non-snapshot data) and fewer than 20k inodes. A linear interpretation
of the rule of thumb suggests that just 10 MB should be enough ARC, although I
don't expect it to scale down that low. On this one, arc_max is set to 256 MB,
but the ARC runs well over 1 GB. I don't know how high it would go if left
alone, since the machine only has 2 GB of RAM to begin with, so when the ARC
gets that big I have to reboot. This one is an AWS VM.
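For anyone wanting to reproduce the observation, the mismatch is easy to check by comparing the configured limit against the live ARC size. A minimal sketch; the hardcoded numbers are stand-ins for the 256 MB / >1 GB case above, with the real FreeBSD sysctl reads shown in comments so the snippet runs anywhere:

```shell
# On an affected FreeBSD box these would be live reads:
#   arc_max=$(sysctl -n vfs.zfs.arc_max)
#   arc_size=$(sysctl -n kstat.zfs.misc.arcstats.size)
arc_max=268435456      # 256 MB limit, as set in /boot/loader.conf
arc_size=1073741824    # ~1 GB ARC, as observed in top

# How far the ARC has blown past its supposed ceiling, in MB
overshoot_mb=$(( (arc_size - arc_max) / 1048576 ))
if [ "$arc_size" -gt "$arc_max" ]; then
    echo "ARC exceeds arc_max by ${overshoot_mb} MB"
fi
```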

For another example, I have a physical machine with 6GB of RAM, with arc_max
set to 256MB and top showing the ARC at 2GB. This one is a bit bigger -- it has
1.4TB across 2 zpools. It does rsync-style backups for three other machines, so
there's a relatively large number of filenames. The second zpool (for the
backups) has roughly 5M inodes with roughly 70-75M filenames (up to 15 names
per inode), with most of its inodes read in a short time span. However, I've
been running this system with these backups on ZFS for years, at least as far
back as FreeBSD 9, without memory problems. While it isn't a huge system, it
was always very stable in the past.

While I don't see this issue on larger machines (with 128 GB of RAM or more,
for example), I don't believe this is about a minimum memory requirement, for a
few reasons. First, the machines are not insanely tiny, nor are they running
with a wildly unbalanced disk/RAM ratio. Second, if there were a hard minimum
requirement, then sysctl should throw an error when the limit is set too low.
Finally, sysctl reports vfs.zfs.arc_meta_limit at ~67 MB on both machines,
which is much lower than arc_max.

However, I retract my remark about it maybe being from a recent update, because
uname on the AWS machine reports 11.1-RELEASE-p4. (I often don't reboot after
updating unless the kernel has a serious vulnerability, and this one has been
up for 109 days.)

Again, mine are 11.1 with the latest patches by freebsd-update. I could try
upgrading to 11.2 if it would be an interesting data point.

>The patch in review is about ARC releasing its cache...

This patch would likely help, particularly since these examples don't have
swap. It seems likely to alleviate my need to meddle with arc_max, which would
be great. However, I'd argue that it's still a bug that arc_max is apparently
completely ignored. And now that I think about it, it's also still a bug that
OOM-killing processes is preferred over swapping or evicting the ARC, unless
that patch fixes that as well.

I could swear that fairly recently I tried changing arc_max and top
immediately showed the ARC chopped off at the new setting. If I remember that
correctly, then this is clearly a regression... but the details of that memory
are vague at this point.
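For the record, the limit in question can be set in two places; a sketch of both forms (the byte value here is just the 256 MB example from above):

```
# /boot/loader.conf -- the boot-time tunable this bug reports as not respected
vfs.zfs.arc_max="268435456"

# Runtime change via sysctl (value in bytes); the behavior I remember is top
# showing the ARC trimmed to the new value shortly after running this:
#   sysctl vfs.zfs.arc_max=268435456
```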

-- 
You are receiving this mail because:
You are the assignee for the bug.
