Date: Tue, 18 May 2021 17:50:30 -0400
From: Mark Johnston
To: Kevin Day
Cc: Alan Somers, FreeBSD Hackers
Subject: Re: The pagedaemon evicts ARC before scanning the inactive page list
In-Reply-To: <3CF9B306-6006-41F8-A880-0AE3AF240BF6@dragondata.com>

On Tue, May 18, 2021 at 04:37:22PM -0500, Kevin Day wrote:
> I'm not sure if this is the exact same thing, but I believe I'm seeing
> something similar on 12.2-RELEASE as well.
>
> Mem: 5628M Active, 4043M Inact, 8879M Laundry, 12G Wired, 1152M Buf, 948M Free
> ARC: 8229M Total, 1010M MFU, 6846M MRU, 26M Anon, 32M Header, 315M Other
>      7350M Compressed, 9988M Uncompressed, 1.36:1 Ratio
> Swap: 2689M Total, 2337M Used, 352M Free, 86% Inuse
>
> Inact keeps growing until all swap is exhausted, to the point that the
> kernel complains (swap_pager_getswapspace(xx): failed), and the system
> never recovers until it reboots. ARC keeps shrinking and growing back,
> but Inact only ever grows. It hasn't reached the point of breaking
> things since the last reboot, but on a bigger server (below) I can
> watch Inact slowly grow and never be freed until the machine is
> swapping so badly that I have to reboot.
>
> Mem: 9648M Active, 604G Inact, 22G Laundry, 934G Wired, 1503M Buf, 415G Free

This sounds somewhat unrelated. Under memory pressure the kernel will
reclaim clean pages from the inactive queue, making them available to
other memory consumers like the ARC. Dirty pages in the inactive queue
have to be written to stable storage before they may be reclaimed;
pages waiting for such treatment show up as "laundry". If swap space
is all used up, then the kernel likely has no way to reclaim dirty
inactive pages short of killing processes.

So the real question is: what is the main source of inactive memory on
your servers?
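
For what it's worth, the queue sizes in question can be watched directly
rather than through top(1). Below is a minimal, untested sketch (not part
of the original thread) that polls the stock vm.stats.vm.* page-queue
counters and the ZFS ARC size kstat via sysctlbyname(3); the counter and
kstat names are assumptions based on a stock 12.x/13.x system and are
worth double-checking locally before relying on them.

/*
 * Sketch: periodically print the sizes of the active, inactive, and
 * laundry page queues, the free page count, and the ARC size, all in
 * megabytes.  Page-queue counters are reported in pages, so they are
 * scaled by the system page size.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <err.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static uint64_t
read_u32(const char *name)
{
	uint32_t val;
	size_t len = sizeof(val);

	if (sysctlbyname(name, &val, &len, NULL, 0) != 0)
		err(1, "sysctlbyname(%s)", name);
	return (val);
}

int
main(void)
{
	uint64_t pgsz = (uint64_t)getpagesize();

	for (;;) {
		uint64_t act = read_u32("vm.stats.vm.v_active_count");
		uint64_t inact = read_u32("vm.stats.vm.v_inactive_count");
		uint64_t laundry = read_u32("vm.stats.vm.v_laundry_count");
		uint64_t freecnt = read_u32("vm.stats.vm.v_free_count");
		uint64_t arcsize;
		size_t len = sizeof(arcsize);

		if (sysctlbyname("kstat.zfs.misc.arcstats.size", &arcsize,
		    &len, NULL, 0) != 0)
			err(1, "sysctlbyname(arcstats.size)");

		printf("active %juM inact %juM laundry %juM free %juM arc %juM\n",
		    (uintmax_t)(act * pgsz >> 20),
		    (uintmax_t)(inact * pgsz >> 20),
		    (uintmax_t)(laundry * pgsz >> 20),
		    (uintmax_t)(freecnt * pgsz >> 20),
		    (uintmax_t)(arcsize >> 20));
		sleep(10);
	}
}

Logging these alongside the workload should make it clearer whether the
growth is in Inact (clean or dirty pages that simply aren't being
reclaimed) or in Laundry (dirty pages waiting on swap), which bears
directly on the question above about where the inactive memory is
coming from.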