Re: The pagedaemon evicts ARC before scanning the inactive page list

From: Kevin Day <>
Date: Tue, 18 May 2021 21:37:22 UTC
I'm not sure if this is the exact same thing, but I believe I'm seeing something similar on 12.2-RELEASE as well.

Mem: 5628M Active, 4043M Inact, 8879M Laundry, 12G Wired, 1152M Buf, 948M Free
ARC: 8229M Total, 1010M MFU, 6846M MRU, 26M Anon, 32M Header, 315M Other
     7350M Compressed, 9988M Uncompressed, 1.36:1 Ratio
Swap: 2689M Total, 2337M Used, 352M Free, 86% Inuse

Inact keeps growing until it exhausts all swap, to the point that the kernel complains (swap_pager_getswapspace(xx): failed), and it never recovers without a reboot. ARC shrinks and grows, but Inact only grows. While it hasn't broken anything on this machine since the last reboot, on a bigger server (below) I can watch Inact slowly grow and never be freed until the machine is swapping so badly I have to reboot.

Mem: 9648M Active, 604G Inact, 22G Laundry, 934G Wired, 1503M Buf, 415G Free

> On May 18, 2021, at 4:07 PM, Alan Somers <> wrote:
> I'm using ZFS on servers with tons of RAM and running FreeBSD 12.2-RELEASE.  Sometimes they get into a pathological situation where most of that RAM sits unused.  For example, right now one of them has:
> 2 GB   Active
> 529 GB Inactive
> 16 GB  Free
> 99 GB  ARC total
> 469 GB ARC max
> 86 GB  ARC target
> When a server gets into this situation, it stays there for days, with the ARC target barely budging.  All that inactive memory never gets reclaimed and put to a good use.  Frequently the server never recovers until a reboot.
> I have a theory for what's going on.  Ever since r334508^ the pagedaemon sends the vm_lowmem event _before_ it scans the inactive page list.  If the ARC frees enough memory, then vm_pageout_scan_inactive won't need to free any.  Is that order really correct?  For reference, here's the relevant code, from vm_pageout_worker:
> shortage = pidctrl_daemon(&vmd->vmd_pid, vmd->vmd_free_count);
> if (shortage > 0) {
>         ofree = vmd->vmd_free_count;
>         if (vm_pageout_lowmem() && vmd->vmd_free_count > ofree)
>                 shortage -= min(vmd->vmd_free_count - ofree,
>                     (u_int)shortage);
>         target_met = vm_pageout_scan_inactive(vmd, shortage,
>             &addl_shortage);
> } else
>         addl_shortage = 0;
> Raising vfs.zfs.arc_min seems to workaround the problem.  But ideally that wouldn't be necessary.
> -Alan
> ^ <>