ZFS arc sizing (maybe related to kern/145229)

Rich <rincebrain at gmail.com>
Thu Apr 8 03:25:58 UTC 2010


kstat.zfs.misc.arcstats.memory_throttle_count: 673016

Since the UFS filesystem here has no files of any reasonable size on it
(it's literally just the root filesystem), that explanation doesn't seem to
apply, and yet the counter above is quite large.
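
For what it's worth, here is a minimal sketch of reading that counter
programmatically with sysctlbyname(3) on FreeBSD. It assumes the kstat is
exported as a read-only 64-bit sysctl node under the name shown above; a
plain "sysctl kstat.zfs.misc.arcstats.memory_throttle_count" from the shell
reports the same value.

    #include <sys/types.h>
    #include <sys/sysctl.h>

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
            uint64_t count;
            size_t len = sizeof(count);

            /* Read the ARC throttle counter; assumes a 64-bit node. */
            if (sysctlbyname("kstat.zfs.misc.arcstats.memory_throttle_count",
                &count, &len, NULL, 0) == -1) {
                    perror("sysctlbyname");
                    return (1);
            }
            printf("memory_throttle_count: %ju\n", (uintmax_t)count);
            return (0);
    }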

- Rich

On Wed, Apr 7, 2010 at 7:31 PM, Adam Nowacki <nowak at xpam.de> wrote:
> Check kstat.zfs.misc.arcstats.memory_throttle_count.
> This counter is incremented every time ZFS thinks the system is running
> low on memory; when that happens it forces a write flush and shrinks the
> ARC to its minimum size. The biggest problem is that the code counts only
> free memory and completely ignores other memory that could be freed
> immediately, such as files cached by UFS. This is very easy to trigger on
> a mixed UFS and ZFS system: just read enough data from UFS to fill its
> cache, and ZFS will begin throttling and will keep doing so even with no
> further UFS reads or writes (a rough sketch of this free-memory-only check
> follows the quoted thread below).
>
> Rich wrote:
>>
>> A datapoint for you:
>> Now running 8-STABLE (plus the mbuf leak fix that went in recently);
>> here are my ARC stats and ARC sysctl settings after the server had been
>> up for about five days since that upgrade:
>> ARC Size:
>>        Current Size:                           587.49M (arcsize)
>>        Target Size: (Adaptive)                 587.63M (c)
>>        Min Size (Hard Limit):                  512.00M (arc_min)
>>        Max Size (Hard Limit):                  3072.00M (arc_max)
>>
>> ARC Size Breakdown:
>>        Recently Used Cache Size:       98.28%  577.50M (p)
>>        Frequently Used Cache Size:     1.72%   10.12M (c-p)
>>
>> ARC Efficiency:
>>        Cache Access Total:                     2602789964
>>        Cache Hit Ratio:                96.11%  2501461882
>>        Cache Miss Ratio:               3.89%   101328082
>>        Actual Hit Ratio:               87.65%  2281380527
>>
>> and
>>
>>        vfs.zfs.arc_meta_limit=1073741824
>>        vfs.zfs.arc_meta_used=548265792
>>        vfs.zfs.arc_min=536870912
>>        vfs.zfs.arc_max=3221225472
>>
>> So the ARC very clearly stays near its minimum size, but whether this is
>> by design or accidental behavior, I'm unsure.
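
To make Adam's point above concrete, here is a rough illustrative sketch of
the kind of free-memory-only check he describes. This is not the actual
FreeBSD arc.c code; the memstats structure, the lotsfree_pages threshold and
the arc_should_throttle() helper are all made up for illustration. The shape
of the problem is the same, though: only the free page count is consulted,
so memory that is merely cached by UFS still looks like memory pressure.

    #include <stdint.h>

    uint64_t memory_throttle_count;     /* the kstat quoted in this thread */

    /*
     * Hypothetical snapshot of memory state.  free_pages is what the check
     * looks at; cache_pages stands in for memory (for example clean
     * UFS-cached pages) that could be reclaimed immediately but is ignored.
     */
    struct memstats {
            uint64_t free_pages;
            uint64_t cache_pages;
    };

    int
    arc_should_throttle(const struct memstats *m, uint64_t needed_pages,
        uint64_t lotsfree_pages)
    {
            /*
             * Only free_pages is consulted.  Adding cache_pages to the
             * left-hand side would stop ZFS from throttling when plenty of
             * memory is merely sitting in the UFS cache.
             */
            if (m->free_pages < lotsfree_pages + needed_pages) {
                    memory_throttle_count++;  /* flush writes, shrink ARC */
                    return (1);
            }
            return (0);
    }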

