Date: Thu, 5 Mar 2026 18:36:20 -0500
From: Mark Johnston <markj@freebsd.org>
To: Alexander Leidinger
Cc: Doug Ambrisko, Rick Macklem, Peter Eriksson, FreeBSD CURRENT, Garrett Wollman, Alexander Motin
Subject: Re: RFC: How ZFS handles arc memory use
References: <22b478c6bad8212c61ca19a983a8e2e4@Leidinger.net> <0d466ee1739ff7ddc967d725453dda35@Leidinger.net>
In-Reply-To: <0d466ee1739ff7ddc967d725453dda35@Leidinger.net>
List-Archive: https://lists.freebsd.org/archives/freebsd-current

On Wed, Mar 04, 2026 at 11:03:42AM +0100, Alexander Leidinger wrote:
> On 2026-03-03 23:45, Doug Ambrisko wrote:
> > On Tue, Mar 03, 2026 at 02:25:11PM -0800, Rick Macklem wrote:
> > | On Tue, Mar 3, 2026 at 12:33 PM Doug Ambrisko wrote:
> > | >
> > | > On Sun, Nov 02, 2025 at 11:48:06AM +0100, Alexander Leidinger wrote:
> > | > | On 2025-10-29 22:06, Doug Ambrisko wrote:
> > | > | > It seems that around the switch to OpenZFS I would have the arc clean
> > | > | > task running at 100% on a core. I use nullfs on my laptop to map my
> > | > | > shared ZFS /data partition into a few vnet instances. Overnight or so
> > | > | > I would get into this issue. I found that I had a bunch of vnodes
> > | > | > being held by other layers. My solution was to reduce kern.maxvnodes
> > | > | > and vfs.zfs.arc.max so the ARC cache stayed reasonable without
> > | > | > killing other applications.
> > | > | >
> > | > | > That is why a while back I added the vnode count to mount -v, so that
> > | > | > I could see the vnode usage for each mount point. I made a script
> > | > | > to report on things:
> > | > |
> > | > | Do you see this also with the nullfs mount option "nocache"?
> > | >
> > | > I seem to have run into this issue with nocache:
> > | >   /data/jail/current/usr/local/etc/cups    /data/jail/current-other/usr/local/etc/cups    nullfs rw,nocache 0 0
> > | >   /data/jail/current/usr/local/etc/sane.d  /data/jail/current-other/usr/local/etc/sane.d  nullfs rw,nocache 0 0
> > | >   /data/jail/current/usr/local/www         /data/jail/current-other/usr/local/www         nullfs rw,nocache 0 0
> > | >   /data/jail/current/usr/local/etc/nginx   /data/jail/current-other/usr/local/etc/nginx   nullfs rw,nocache 0 0
> > | >   /data/jail/current/tftpboot              /data/jail/current-other/tftpboot              nullfs rw,nocache 0 0
> > | >   /data/jail/current/usr/local/lib/grub    /data/jail/current-other/usr/local/lib/grub    nullfs rw,nocache 0 0
> > | >   /data/jail                               /data/jail/current-other/data/jail             nullfs rw,nocache 0 0
> > | >   /data/jail                               /data/jail/current/data/jail                   nullfs rw,nocache 0 0
> > | >
> > | > After a while (a couple of months or more) my laptop was running slow
> > | > with a high load. The periodic find was running slow and arc_prune was
> > | > spinning. When I reduced the number of vnodes, things got better.
> > | > My vfs.zfs.arc_max is 1073741824 so that I have memory for other things.
> > | >
> > | > nocache does help; it takes longer to get into this situation.
> > | Have any of you guys tried increasing vfs.zfs.arc.free_target?
> > |
> > | If I understand the code correctly, when freemem < vfs.zfs.arc.free_target,
> > | the reaper thread (the one that does uma_zone_reclaim() to return pages
> > | to the system from the UMA keg that the ARC uses) should be activated.
> >
> > I haven't tried that. I set:
> >   kern.maxvnodes
> >   vfs.zfs.arc.min
> >   vfs.zfs.arc.max
> >   vfs.zfs.prefetch.disable=1
> >
> > I need to make sure kern.maxvnodes is small enough that it doesn't thrash
> > when vfs.zfs.arc.max is set to 1G. The issues tend to take a while to
> > happen.
> > On the plus side, I can adjust these when I hit them, mostly by
> > reducing kern.maxvnodes, without having to do a reboot.
>
> There was this commit recently:
> https://cgit.freebsd.org/src/commit/sys/fs/nullfs?id=8b64d46fab87af3ae062901312187f3a04ad2d67
>
> I have not checked whether this race condition could result in anything
> related to what we see. From the commit message I cannot deduce whether it
> could, for example, lead to an (even temporary) resource leak which might
> explain this behavior. Mark, what is the high-level result of the race
> condition you fixed in nullfs? At first look at the commit log I would
> rather assume the race could cause vnodes of the lower FS to be freed too
> early, not to never be freed at all.

The high-level result would be a lock leak and presumably an eventual
deadlock or crash. In an INVARIANTS kernel you'd get an assertion failure.
I doubt that bug is responsible for the issues reported in this thread.
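[Editor's note: for readers who want to experiment with the knobs discussed in this thread, here is a sketch of the tuning as a config fragment. Only vfs.zfs.arc.max=1073741824 and vfs.zfs.prefetch.disable=1 are values actually given in the thread; the kern.maxvnodes value is purely illustrative, and sysctl names can differ between FreeBSD/OpenZFS versions.]

```shell
# Sketch only -- illustrative values, not recommendations.

# Runtime tuning: takes effect immediately, lost on reboot.
sysctl vfs.zfs.arc.max=1073741824    # cap the ARC at 1 GiB (Doug's setting)
sysctl vfs.zfs.prefetch.disable=1    # Doug also disables ZFS prefetch
sysctl kern.maxvnodes=200000         # example cap; pick a value low enough
                                     # that nullfs/ZFS vnode counts don't
                                     # thrash against the reduced ARC

# Rick's suggestion: raise vfs.zfs.arc.free_target (a page count, per his
# description) so the ARC reaper activates while more memory is still free.

# To persist across reboots, put the same variables in /etc/sysctl.conf, e.g.:
#   vfs.zfs.arc.max=1073741824
#   kern.maxvnodes=200000
```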