Date: Tue, 18 May 2021 17:45:18 -0400
From: Mark Johnston
To: Alan Somers
Cc: FreeBSD Hackers
Subject: Re: The pagedaemon evicts ARC before scanning the inactive page list

On Tue, May 18, 2021 at 03:07:44PM -0600, Alan Somers wrote:
> I'm using ZFS on servers with tons of RAM and running FreeBSD
> 12.2-RELEASE.
> Sometimes they get into a pathological situation where most of that
> RAM sits unused.  For example, right now one of them has:
>
> 2 GB Active
> 529 GB Inactive
> 16 GB Free
> 99 GB ARC total
> 469 GB ARC max
> 86 GB ARC target
>
> When a server gets into this situation, it stays there for days, with
> the ARC target barely budging.  All that inactive memory never gets
> reclaimed and put to good use.  Frequently the server never recovers
> without a reboot.
>
> I have a theory for what's going on.  Ever since r334508^ the
> pagedaemon sends the vm_lowmem event _before_ it scans the inactive
> page list.  If the ARC frees enough memory, then
> vm_pageout_scan_inactive won't need to free any.  Is that order
> really correct?  For reference, here's the relevant code, from
> vm_pageout_worker:

That was the case even before r334508.  Note that prior to that
revision, vm_pageout_scan_inactive() would trigger vm_lowmem if
pass > 0, before scanning the inactive queue.  During a memory
shortage we have pass > 0; pass == 0 only when the page daemon is
scanning the active queue.

> 	shortage = pidctrl_daemon(&vmd->vmd_pid, vmd->vmd_free_count);
> 	if (shortage > 0) {
> 		ofree = vmd->vmd_free_count;
> 		if (vm_pageout_lowmem() && vmd->vmd_free_count > ofree)
> 			shortage -= min(vmd->vmd_free_count - ofree,
> 			    (u_int)shortage);
> 		target_met = vm_pageout_scan_inactive(vmd, shortage,
> 		    &addl_shortage);
> 	} else
> 		addl_shortage = 0;
>
> Raising vfs.zfs.arc_min seems to work around the problem.  But
> ideally that wouldn't be necessary.

vm_lowmem is too primitive: it doesn't tell subscribing subsystems
anything about the magnitude of the shortage.  At the same time, the
VM doesn't know how much memory each of those subsystems is consuming.
A better strategy, at least for the ARC, would be to reclaim memory
based on the relative memory consumption of each subsystem.  In your
case, when the page daemon goes to reclaim memory, it should use the
inactive queue to make up ~85% of the shortfall and reclaim the rest
from the ARC.

Even better would be if the ARC could use the page cache as a
second-level cache, like the buffer cache does.

Today I believe the ARC treats vm_lowmem as a signal to shed some
arbitrary fraction of evictable data.  If the ARC is able to quickly
answer the question, "how much memory can I release if asked?", then
the page daemon could use that to determine how much of its
reclamation target should come from the ARC vs. the page cache.
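
To make the proportional split concrete, here's a toy user-space
sketch of the arithmetic, using the numbers from your report above.
The arc_evictable_mb() function is purely hypothetical, standing in
for the "how much memory can I release if asked?" query; nothing below
is a real kernel interface.

/*
 * Toy model: split a reclaim target between the inactive queue and
 * the ARC in proportion to their current sizes.  All figures are in
 * megabytes and come from the report quoted above.
 *
 * Build and run with: cc -o split split.c && ./split
 */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for an ARC "evictable memory" query. */
static uint64_t
arc_evictable_mb(void)
{
	return (99ULL * 1024);		/* 99 GB ARC, assumed all evictable */
}

int
main(void)
{
	uint64_t inactive, arc, shortage, from_inactive, from_arc;

	inactive = 529ULL * 1024;	/* 529 GB inactive queue */
	arc = arc_evictable_mb();
	shortage = 10ULL * 1024;	/* arbitrary 10 GB reclaim target */

	/* Each pool contributes in proportion to its size. */
	from_inactive = shortage * inactive / (inactive + arc);
	from_arc = shortage - from_inactive;

	printf("reclaim %ju MB (%.0f%%) from the inactive queue "
	    "and %ju MB from the ARC\n",
	    (uintmax_t)from_inactive,
	    100.0 * from_inactive / shortage,
	    (uintmax_t)from_arc);
	return (0);
}

With your numbers this works out to roughly an 84/16 split, which is
where the ~85% figure above comes from.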