Subject: Re: An attempted test of main's "git: 2ad756a6bbb3" "merge openzfs/zfs@95f71c019" that did not go as planned
From: Mark Millard
Date: Mon, 4 Sep 2023 02:00:43 -0700
To: Alexander Motin
Cc: dev-commits-src-main@freebsd.org, Current FreeBSD
In-Reply-To: <673A446E-6F94-451E-910F-079F678C5289@yahoo.com>
References: <673A446E-6F94-451E-910F-079F678C5289@yahoo.com>
Message-Id: <2BDD30B5-6248-4EC3-83C8-0499E0717D1D@yahoo.com>
List-Archive: https://lists.freebsd.org/archives/freebsd-current

On Sep 3, 2023, at 23:35, Mark Millard wrote:

> On Sep 3, 2023, at 22:06, Alexander Motin wrote:
>
>> On 03.09.2023 22:54, Mark Millard wrote:
>>> After that ^t produced the likes of:
>>> load: 6.39 cmd: sh 4849 [tx->tx_quiesce_done_cv] 10047.33r 0.51u 121.32s 1% 13004k
>>
>> So the full state is not "tx->tx", but is actually
>> "tx->tx_quiesce_done_cv", which means the thread is
>> waiting for a new transaction to be opened, which means some
>> previous one has to be quiesced and then synced.
>>
>>> #0 0xffffffff80b6f103 at mi_switch+0x173
>>> #1 0xffffffff80bc0f24 at sleepq_switch+0x104
>>> #2 0xffffffff80aec4c5 at _cv_wait+0x165
>>> #3 0xffffffff82aba365 at txg_wait_open+0xf5
>>> #4 0xffffffff82a11b81 at dmu_free_long_range+0x151
>>
>> Here it seems like the transaction commit is being waited on due to a
>> large amount of delete operations, which ZFS tries to spread between
>> separate TXGs.
>
> That fit the context: cleaning out /usr/local/poudriere/data/.m/
>
>> You should probably see some large and growing number in
>> sysctl kstat.zfs.misc.dmu_tx.dmu_tx_dirty_frees_delay .
>
> After the reboot I started a -J64 example. It has avoided the
> early "witness exhausted". Again I ^C'd about an hour after
> the 2nd builder had started. So: again cleaning out
> /usr/local/poudriere/data/.m/
>
> Only seconds between:
>
> # sysctl kstat.zfs.misc.dmu_tx.dmu_tx_dirty_frees_delay
> kstat.zfs.misc.dmu_tx.dmu_tx_dirty_frees_delay: 276042
>
> # sysctl kstat.zfs.misc.dmu_tx.dmu_tx_dirty_frees_delay
> kstat.zfs.misc.dmu_tx.dmu_tx_dirty_frees_delay: 276427
>
> # sysctl kstat.zfs.misc.dmu_tx.dmu_tx_dirty_frees_delay
> kstat.zfs.misc.dmu_tx.dmu_tx_dirty_frees_delay: 277323
>
> # sysctl kstat.zfs.misc.dmu_tx.dmu_tx_dirty_frees_delay
> kstat.zfs.misc.dmu_tx.dmu_tx_dirty_frees_delay: 278027
>
> I have found a measure of progress: zfs list's USED
> for /usr/local/poudriere/data/.m is decreasing. So
> ztop's d/s was a good classification: deletes.
>
>>> #5 0xffffffff829a87d2 at zfs_rmnode+0x72
>>> #6 0xffffffff829b658d at zfs_freebsd_reclaim+0x3d
>>> #7 0xffffffff8113a495 at VOP_RECLAIM_APV+0x35
>>> #8 0xffffffff80c5a7d9 at vgonel+0x3a9
>>> #9 0xffffffff80c5af7f at vrecycle+0x3f
>>> #10 0xffffffff829b643e at zfs_freebsd_inactive+0x4e
>>> #11 0xffffffff80c598cf at vinactivef+0xbf
>>> #12 0xffffffff80c590da at vput_final+0x2aa
>>> #13 0xffffffff80c68886 at kern_funlinkat+0x2f6
>>> #14 0xffffffff80c68588 at sys_unlink+0x28
>>> #15 0xffffffff8106323f at amd64_syscall+0x14f
>>> #16 0xffffffff8103512b at fast_syscall_common+0xf8
>>
>> What we don't see here is what the quiesce and sync threads of the
>> pool are actually doing. The sync thread has plenty of different jobs,
>> including async write, async destroy, scrub and others, that may all
>> delay each other.
>>
>> Before you rebooted the system, depending on how alive it is, could
>> you save a number of outputs of `procstat -akk`, or at least
>> specifically `procstat -akk | grep txg_thread_enter` if the full
>> output is hard to get? Or somehow else observe what they are doing.
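As an aside, not something I did at the time: captures like the ones
quoted below could also have been scripted instead of typed by hand,
e.g. via something like the untested sh sketch:

# for i in 0 1 2 3 4 5; do procstat -akk > ~/mmjnk0$i.txt; sleep 5; done

(The 5 second spacing and the mmjnk file naming are just ad hoc choices
of mine, nothing standard.) I actually ran the individual commands by
hand, as quoted: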
>
> # procstat -akk > ~/mmjnk00.txt
> # procstat -akk > ~/mmjnk01.txt
> # procstat -akk > ~/mmjnk02.txt
> # procstat -akk > ~/mmjnk03.txt
> # procstat -akk > ~/mmjnk04.txt
> # procstat -akk > ~/mmjnk05.txt
> # grep txg_thread_enter ~/mmjnk0[0-5].txt
> /usr/home/root/mmjnk00.txt:    6 100881 zfskern          txg_thread_enter    mi_switch+0x173 sleepq_switch+0x104 _cv_wait+0x165 txg_thread_wait+0xeb txg_quiesce_thread+0x144 fork_exit+0x82 fork_trampoline+0xe
> /usr/home/root/mmjnk00.txt:    6 100882 zfskern          txg_thread_enter    mi_switch+0x173 sleepq_switch+0x104 sleepq_timedwait+0x4b _cv_timedwait_sbt+0x188 zio_wait+0x3c9 dsl_pool_sync+0x139 spa_sync+0xc68 txg_sync_thread+0x2eb fork_exit+0x82 fork_trampoline+0xe
> /usr/home/root/mmjnk01.txt:    6 100881 zfskern          txg_thread_enter    mi_switch+0x173 sleepq_switch+0x104 _cv_wait+0x165 txg_thread_wait+0xeb txg_quiesce_thread+0x144 fork_exit+0x82 fork_trampoline+0xe
> /usr/home/root/mmjnk01.txt:    6 100882 zfskern          txg_thread_enter    mi_switch+0x173 sleepq_switch+0x104 sleepq_timedwait+0x4b _cv_timedwait_sbt+0x188 zio_wait+0x3c9 dsl_pool_sync+0x139 spa_sync+0xc68 txg_sync_thread+0x2eb fork_exit+0x82 fork_trampoline+0xe
> /usr/home/root/mmjnk02.txt:    6 100881 zfskern          txg_thread_enter    mi_switch+0x173 sleepq_switch+0x104 _cv_wait+0x165 txg_thread_wait+0xeb txg_quiesce_thread+0x144 fork_exit+0x82 fork_trampoline+0xe
> /usr/home/root/mmjnk02.txt:    6 100882 zfskern          txg_thread_enter    mi_switch+0x173 sleepq_switch+0x104 sleepq_timedwait+0x4b _cv_timedwait_sbt+0x188 zio_wait+0x3c9 dsl_pool_sync+0x139 spa_sync+0xc68 txg_sync_thread+0x2eb fork_exit+0x82 fork_trampoline+0xe
> /usr/home/root/mmjnk03.txt:    6 100881 zfskern          txg_thread_enter    mi_switch+0x173 sleepq_switch+0x104 _cv_wait+0x165 txg_thread_wait+0xeb txg_quiesce_thread+0x144 fork_exit+0x82 fork_trampoline+0xe
> /usr/home/root/mmjnk03.txt:    6 100882 zfskern          txg_thread_enter    mi_switch+0x173 sleepq_switch+0x104 sleepq_timedwait+0x4b _cv_timedwait_sbt+0x188 zio_wait+0x3c9 dsl_pool_sync+0x139 spa_sync+0xc68 txg_sync_thread+0x2eb fork_exit+0x82 fork_trampoline+0xe
> /usr/home/root/mmjnk04.txt:    6 100881 zfskern          txg_thread_enter    mi_switch+0x173 sleepq_switch+0x104 _cv_wait+0x165 txg_thread_wait+0xeb txg_quiesce_thread+0x144 fork_exit+0x82 fork_trampoline+0xe
> /usr/home/root/mmjnk04.txt:    6 100882 zfskern          txg_thread_enter    mi_switch+0x173 sleepq_switch+0x104 sleepq_timedwait+0x4b _cv_timedwait_sbt+0x188 zio_wait+0x3c9 dsl_pool_sync+0x139 spa_sync+0xc68 txg_sync_thread+0x2eb fork_exit+0x82 fork_trampoline+0xe
> /usr/home/root/mmjnk05.txt:    6 100881 zfskern          txg_thread_enter    mi_switch+0x173 sleepq_switch+0x104 _cv_wait+0x165 txg_thread_wait+0xeb txg_quiesce_thread+0x144 fork_exit+0x82 fork_trampoline+0xe
> /usr/home/root/mmjnk05.txt:    6 100882 zfskern          txg_thread_enter    mi_switch+0x173 sleepq_switch+0x104 sleepq_timedwait+0x4b _cv_timedwait_sbt+0x188 zio_wait+0x3c9 dsl_pool_sync+0x139 spa_sync+0xc68 txg_sync_thread+0x2eb fork_exit+0x82 fork_trampoline+0xe
>
> (Hopefully that will be a sufficiently useful start.)
>
>> `zpool status`, `zpool get all` and `sysctl -a` would also not harm.
>
> It is a very simple zpool configuration: one partition.
> I only use it for bectl BE reasons, not the general
> range of reasons for using zfs. I created the media with
> my normal content, then checkpointed before doing the
> git fetch to start to set up the experiment.
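As a side note on the checkpointing: if I remember right, the sequence
before starting the experiment was just the likes of (shown here from
memory as an illustration, not copied from my shell history):

# zpool checkpoint zamd64

with the idea that the pool could later be reverted by exporting it and
importing with --rewind-to-checkpoint from other boot media, or the
checkpoint discarded via "zpool checkpoint -d zamd64" once no longer
wanted.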
>
> # zpool status
>   pool: zamd64
>  state: ONLINE
> status: Some supported and requested features are not enabled on the pool.
>         The pool can still be used, but some features are unavailable.
> action: Enable all features using 'zpool upgrade'. Once this is done,
>         the pool may no longer be accessible by software that does not support
>         the features. See zpool-features(7) for details.
> checkpoint: created Sun Sep  3 11:46:54 2023, consumes 2.17M
> config:
>
>         NAME            STATE     READ WRITE CKSUM
>         zamd64          ONLINE       0     0     0
>           gpt/amd64zfs  ONLINE       0     0     0
>
> errors: No known data errors
>
> There was also a snapshot in place before I did the
> checkpoint operation.
>
> I deliberately did not use my typical openzfs-2.1-freebsd
> for compatibility but used defaults when creating the pool:
>
> # zpool get all
> NAME    PROPERTY                       VALUE                   SOURCE
> zamd64  size                           872G                    -
> zamd64  capacity                       21%                     -
> zamd64  altroot                        -                       default
> zamd64  health                         ONLINE                  -
> zamd64  guid                           4817074778276814820     -
> zamd64  version                        -                       default
> zamd64  bootfs                         zamd64/ROOT/main-amd64  local
> zamd64  delegation                     on                      default
> zamd64  autoreplace                    off                     default
> zamd64  cachefile                      -                       default
> zamd64  failmode                       wait                    default
> zamd64  listsnapshots                  off                     default
> zamd64  autoexpand                     off                     default
> zamd64  dedupratio                     1.00x                   -
> zamd64  free                           688G                    -
> zamd64  allocated                      184G                    -
> zamd64  readonly                       off                     -
> zamd64  ashift                         0                       default
> zamd64  comment                        -                       default
> zamd64  expandsize                     -                       -
> zamd64  freeing                        0                       -
> zamd64  fragmentation                  17%                     -
> zamd64  leaked                         0                       -
> zamd64  multihost                      off                     default
> zamd64  checkpoint                     2.17M                   -
> zamd64  load_guid                      17719601284614388220    -
> zamd64  autotrim                       off                     default
> zamd64  compatibility                  off                     default
> zamd64  bcloneused                     0                       -
> zamd64  bclonesaved                    0                       -
> zamd64  bcloneratio                    1.00x                   -
> zamd64  feature@async_destroy          enabled                 local
> zamd64  feature@empty_bpobj            active                  local
> zamd64  feature@lz4_compress           active                  local
> zamd64  feature@multi_vdev_crash_dump  enabled                 local
> zamd64  feature@spacemap_histogram     active                  local
> zamd64  feature@enabled_txg            active                  local
> zamd64  feature@hole_birth             active                  local
> zamd64  feature@extensible_dataset     active                  local
> zamd64  feature@embedded_data          active                  local
> zamd64  feature@bookmarks              enabled                 local
> zamd64  feature@filesystem_limits      enabled                 local
> zamd64  feature@large_blocks           enabled                 local
> zamd64  feature@large_dnode            enabled                 local
> zamd64  feature@sha512                 enabled                 local
> zamd64  feature@skein                  enabled                 local
> zamd64  feature@edonr                  enabled                 local
> zamd64  feature@userobj_accounting     active                  local
> zamd64  feature@encryption             enabled                 local
> zamd64  feature@project_quota          active                  local
> zamd64  feature@device_removal         enabled                 local
> zamd64  feature@obsolete_counts        enabled                 local
> zamd64  feature@zpool_checkpoint       active                  local
> zamd64  feature@spacemap_v2            active                  local
> zamd64  feature@allocation_classes     enabled                 local
> zamd64  feature@resilver_defer         enabled                 local
> zamd64  feature@bookmark_v2            enabled                 local
> zamd64  feature@redaction_bookmarks    enabled                 local
> zamd64  feature@redacted_datasets      enabled                 local
> zamd64  feature@bookmark_written       enabled                 local
> zamd64  feature@log_spacemap           active                  local
> zamd64  feature@livelist               enabled                 local
> zamd64  feature@device_rebuild         enabled                 local
> zamd64  feature@zstd_compress          enabled                 local
> zamd64  feature@draid                  enabled                 local
> zamd64  feature@zilsaxattr             active                  local
> zamd64  feature@head_errlog            active                  local
> zamd64  feature@blake3                 enabled                 local
> zamd64  feature@block_cloning          enabled                 local
> zamd64  feature@vdev_zaps_v2           active                  local
> zamd64  feature@redaction_list_spill   disabled                local
>
> /etc/sysctl.conf does have:
>
> vfs.zfs.min_auto_ashift=12
> vfs.zfs.per_txg_dirty_frees_percent=5
>
> The vfs.zfs.per_txg_dirty_frees_percent is from prior
> Mateusz Guzik help, where after testing the change I
> reported:
>
> Result summary: Seems to have avoided the sustained periods
> of low load average activity. Much better for the context.
>
> But that was for a different machine (aarch64, 8 cores),
> although it was also for poudriere bulk use.
>
> Turns out the default of 30 was causing something like what
> is seen here: I could have presented some of the information
> via the small load average figures here.
>
> (Note: 5 is the old default, 30 is the newer one. Other contexts
> have other problems with 5: there is no single right setting and
> no automated configuration.)
>
> Other than those 2 items, zfs is untuned (defaults).
>
> sysctl -a is a lot more output (864930 bytes) so I'll skip
> it for now.
>
>> PS: I may be wrong, but the USB in "USB3 NVMe SSD storage" makes me
>> shiver. Make sure there are no storage problems, like some huge
>> delays, timeouts, etc., that can be seen, for example, as busy
>> percents regularly spiking far above 100% in your `gstat -spod`.
>
> The "gstat -spod" output showed (and shows): around 0.8ms/w to 3ms/w,
> mostly at the lower end of the range. < 98% busy, no spikes to > 100%.
> It is a previously unused Samsung PSSD T7 Touch.

A little more context here: that is for the "kB" figures seen during
the cleanup/delete activity. During port builds into packages, larger
"kB" figures are seen and the ms/w figures will tend to be larger as
well. The larger sizes can also lead to reaching somewhat above
100 %busy some of the time.

I'll also note that I've ended up doing a lot more write activity in
this exploring than I'd expected.

> I was not prepared to replace the content of a PCIe slot's media
> or M.2 connection's media for this temporary purpose. No spare
> supply for those, so no simple swapping for those.

===
Mark Millard
marklmi at yahoo.com
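P.S. In case it is useful to anyone experimenting with the tunable: as
far as I know, vfs.zfs.per_txg_dirty_frees_percent can also be adjusted
on a live system, not just via /etc/sysctl.conf, via the likes of:

# sysctl vfs.zfs.per_txg_dirty_frees_percent=5

(The 5 here is just the figure that happened to work for my context, as
noted above, not a general recommendation.)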