ZFS arc_reclaim_needed: better cooperation with pagedaemon

Artem Belevich fbsdlist at src.cx
Tue Aug 24 02:11:01 UTC 2010


Could you try the following experiments before and after applying the
patch, while monitoring kstat.zfs.misc.arcstats.size and
vm.stats.vm.v_inactive_count?
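
A minimal way to watch both counters from a spare terminal; this is
just a sketch, and the 5-second interval is arbitrary:

# poll the ARC size and the inactive page count every 5 seconds
while true; do
    sysctl kstat.zfs.misc.arcstats.size vm.stats.vm.v_inactive_count
    sleep 5
done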

First prepare the data.
* You'll need some files totalling around the amount of physical
memory on your box.  Multiple copies of /usr/src should do the trick.
* Place one copy on a UFS filesystem and another on ZFS (example
commands below).
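
For example; the destination paths below are only placeholders,
substitute whatever UFS mount point and ZFS dataset you actually have:

# one copy on UFS, one on ZFS; /ufs and /tank are hypothetical mounts
cp -Rp /usr/src /ufs/src-copy
cp -Rp /usr/src /tank/src-copy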

Experiment #1:
* Prime the ARC by tarring the ZFS dataset to /dev/null.
* Now tar both datasets in parallel with output to /dev/null (see the
sketch below).
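
As a sketch, using the hypothetical paths from the preparation step:

# prime the ARC with the ZFS copy
tar cf /dev/null -C /tank src-copy
# then stream both copies in parallel
tar cf /dev/null -C /tank src-copy &
tar cf /dev/null -C /ufs  src-copy &
wait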

Previously you would end up with the ARC size shrinking down to
arc_min.  What I hope to see after the patch is that inactive memory
and the ARC reach some sort of equilibrium, with neither monopolizing
all available memory.

Experiment #2:
If equilibrium is reached, try running some application that allocates
and uses about half of your physical memory.
Something like the perl one-liner that used to cause memory shortages,
only a bit less drastic:
perl -e '$x="x"x1_000_000_000';   # this should allocate about 2GB.
Tune the number to suit your system.
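
One way to pick the number, just as a sketch: hw.physmem is in bytes,
and the sleep is only there so the process holds the memory while you
watch the counters.

# allocate a string of roughly half of physical RAM and hold it briefly
half=$(( $(sysctl -n hw.physmem) / 2 ))
perl -e "\$x = 'x' x $half; sleep 120"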

Again, in the past the ARC would be the one freeing up the memory.
Let's see if the inactive list gives up some, too.

--Artem



On Mon, Aug 23, 2010 at 1:44 PM, jhell <jhell at dataix.net> wrote:
> On 08/23/2010 16:42, jhell wrote:
>> On 08/23/2010 03:28, Artem Belevich wrote:
>>> Can anyone test the patch on a system with a mix of UFS/ZFS filesystems
>>> and see if the change mitigates or solves the issue with inactive
>>> memory putting excessive backpressure on the ARC?
>>
>> I have a system currently patched up to ZFSv15 and mm@'s metaslab patch
>> running that I can test this on. Throw me a patch and some specific
>> tests and I can have the results back to you in the next 2 days.
>>
>
> Forget the patch; a 1-line change I can hand type in. As for specific
> tests, let me know...
>
> --
>
>  jhell,v
>