From: Mark Millard <marklmi@yahoo.com>
Subject: Re: armv8.2-A+ tuned FreeBSD kernels vs. poudriere bulk and USB3 media: tx->tx_quiesce_done_cv related blocking of processes?
List-Id: Porting FreeBSD to ARM processors
List-Archive: https://lists.freebsd.org/archives/freebsd-arm
Date: Sun, 30 Apr 2023 17:33:36 -0700
References: <7AE28A5B-109E-4C26-9D70-BCA5D49CD79D@yahoo.com> <02DC03AE-E082-4FB5-AA0D-396F64CC23CB@yahoo.com>
To: FreeBSD Hackers, freebsd-arm
In-Reply-To: <02DC03AE-E082-4FB5-AA0D-396F64CC23CB@yahoo.com>

As the evidence this time is largely independent of the details
reported previously, I'm top posting this.

The previous ZFS on USB3 results were based on poudriere using
"USE_TMPFS=data", meaning that almost all file I/O was via ZFS to
the USB3 media.

The UFS on U2 960GB Optane via USB3 adapter results did not suffer
the reported problems, despite "USE_TMPFS=data". (I let it run to
completion.) But this had both a media and a file system difference.

This time the results are for just changing poudriere to use
"USE_TMPFS=all" instead, but back on the original ZFS on USB3 media.
The implication is that the vast majority of the file I/O is not via
ZFS to the USB3 media. The context has 32 GiBytes of RAM and about
118 GiBytes of swap/paging space. It would need to page if let run
to completion.

Observing, the load average is behaving normally: things are not
stuck waiting. "gstat -spod" indicates the ZFS I/O is not sustained
(no paging or swap space use yet). First 1 hr: 262 ports built. But
this had both a media and a file system difference again.

I'm stopping after this, in order to set up the next just-ZFS
experiments.

Next experiments:

I expect to move the ZFS context to the U2 960GB Optane media used
with the USB3 adapter and to test both "USE_TMPFS=data" and
"USE_TMPFS=all", probably letting USE_TMPFS=all run to completion.

If the Optane based USE_TMPFS=data context still has the problem,
then even the more performant media would not have been enough to
avoid what would then appear to be a ZFS problem, the two other
file systems not having the problem. The Optane based USE_TMPFS=all
context should just handle the paging and the rarer ZFS I/O more
quickly. I do not expect problems for this combination, given the
UFS on Optane USB3 results and the partial USE_TMPFS=all non-Optane
USB3 results.

Even with ZFS working, this is likely the more performant type of
context for the Windows Dev Kit 2023, given that I'm leaving Windows
11 Pro in place on the internal media.
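For reference, a minimal sketch of the two poudriere.conf settings
being compared here (the comments paraphrase the descriptions in the
stock poudriere.conf.sample; everything else in my configuration is
the same between the two ZFS-on-USB3 runs):

  # "data" puts only poudriere's cache/temporary build data on tmpfs;
  # the builder jails and port WRKDIRs stay on the pool, so nearly all
  # build I/O goes through ZFS to the USB3 media:
  USE_TMPFS=data
  # "all" runs the entire build in memory, builder jails included, so
  # most file I/O bypasses ZFS, at the cost of RAM and eventual paging:
  #USE_TMPFS=all
  # As in the quoted older material below, parallel make jobs are allowed:
  ALLOW_MAKE_JOBS=yes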
Hypothesis for the original problem: I wonder if ZFS write activity
to the USB3 media was largely blocking the ZFS read activity to the
same media, causing lots of things to have to spend much time waiting
for data instead of making progress, leading to long periods of low
load averages.


Older material:

On Apr 30, 2023, at 00:50, Mark Millard wrote:

> On Apr 29, 2023, at 19:44, Mark Millard wrote:
>
>> This is based on: main-n262658-b347c2284603-dirty, b347c2284603
>> being from late Apr 28, 2023 UTC. (The "-dirty" is from some
>> historical patches that I use.) The build is a non-debug build
>> (but with symbols not stripped). World and kernel had been built
>> using:
>>
>> -mcpu=cortex-a78C+flagm+nofp16fml
>>
>> just for testing purposes. (Worked nicely for -j8 buildworld
>> buildkernel testing for the 4 cortex-a78c's plus 4 cortex-x1c's
>> present.)
>>
>> Monitoring poudriere bulk related activity via top and gstat -spod
>> I see a lot of the odd result of one process doing something
>> like:
>>
>> CPU4 4 1:39 99.12% /usr/local/sbin/pkg-static create -f tzst -r /wrkdirs/usr/ports/devel/cmake-core/work/stage
>>
>> while other processes sit in the likes of:
>>
>> tx->tx
>> zcq->z
>> zcw->z
>> zilog-
>> select
>> wait
>>
>> But sometimes there is no CPU bound process and the top CPU process is
>> the likes of:
>>
>> 1.24% [usb{usbus0}]
>>
>> "gstat -spod" basically shows da0 dedicated to write activity most
>> of the time.
>>
>> After: sysctl kern.tty_info_kstacks=1
>> Then using ^T at various times, I see a lot of:
>>
>> load: 0.48 cmd: sh 93914 [tx->tx_quiesce_done_cv] 7534.91r 11.06u 22.66s 0% 3800k
>> #0 0xffff0000004fd564 at mi_switch+0x104
>> #1 0xffff000000463f40 at _cv_wait+0x120
>> #2 0xffff00000153fa34 at txg_wait_open+0xf4
>> #3 0xffff0000014a40bc at dmu_free_long_range+0x17c
>> #4 0xffff000001448254 at zfs_rmnode+0x64
>> #9 0xffff000001455678 at zfs_freebsd_inactive+0x48
>> #10 0xffff0000005fc430 at vinactivef+0x180
>> #11 0xffff0000005fba50 at vput_final+0x200
>> #12 0xffff00000060c4d0 at kern_funlinkat+0x320
>> #13 0xffff00015d6cbbf4 at filemon_wrapper_unlink+0x14
>> #14 0xffff0000008f8514 at do_el0_sync+0x594
>> #15 0xffff0000008d4904 at handle_el0_sync+0x40
>>
>> load: 0.34 cmd: sh 93914 [tx->tx_quiesce_done_cv] 7566.69r 11.06u 22.66s 0% 3800k
>> #0 0xffff0000004fd564 at mi_switch+0x104
>> #1 0xffff000000463f40 at _cv_wait+0x120
>> #2 0xffff00000153fa34 at txg_wait_open+0xf4
>> #3 0xffff0000014a40bc at dmu_free_long_range+0x17c
>> #4 0xffff000001448254 at zfs_rmnode+0x64
>> #5 0xffff0000014557c4 at zfs_freebsd_reclaim+0x34
>> #6 0xffff000000a1340c at VOP_RECLAIM_APV+0x2c
>> #7 0xffff0000005fd6c0 at vgonel+0x450
>> #8 0xffff0000005fde7c at vrecycle+0x9c
>> #9 0xffff000001455678 at zfs_freebsd_inactive+0x48
>> #10 0xffff0000005fc430 at vinactivef+0x180
>> #11 0xffff0000005fba50 at vput_final+0x200
>> #12 0xffff00000060c4d0 at kern_funlinkat+0x320
>> #13 0xffff00015d6cbbf4 at filemon_wrapper_unlink+0x14
>> #14 0xffff0000008f8514 at do_el0_sync+0x594
>> #15 0xffff0000008d4904 at handle_el0_sync+0x40
>>
>> load: 0.44 cmd: sh 93914 [tx->tx_quiesce_done_cv] 7693.52r 11.24u 23.08s 0% 3800k
>> #0 0xffff0000004fd564 at mi_switch+0x104
>> #1 0xffff000000463f40 at _cv_wait+0x120
>> #2 0xffff00000153fa34 at txg_wait_open+0xf4
>> #3 0xffff0000014a40bc at dmu_free_long_range+0x17c
>> #4 0xffff000001448254 at zfs_rmnode+0x64
>> #5 0xffff0000014557c4 at zfs_freebsd_reclaim+0x34
>> #6 0xffff000000a1340c at VOP_RECLAIM_APV+0x2c
>> #7 0xffff0000005fd6c0 at vgonel+0x450
>> #8 0xffff0000005fde7c at vrecycle+0x9c
>> #9 0xffff000001455678 at zfs_freebsd_inactive+0x48
>> #10 0xffff0000005fc430 at vinactivef+0x180
>> #11 0xffff0000005fba50 at vput_final+0x200
>> #12 0xffff00000060c4d0 at kern_funlinkat+0x320
>> #13 0xffff00015d6cbbf4 at filemon_wrapper_unlink+0x14
>> #14 0xffff0000008f8514 at do_el0_sync+0x594
>> #15 0xffff0000008d4904 at handle_el0_sync+0x40
>>
>>
>> The system (Windows Dev Kit 2023) has 32 GiBytes of RAM. Example
>> output from a top modified to show some "Max[imum]Obs[erved]"
>> information:
>>
>> last pid: 17198;  load averages: 0.33, 0.58, 1.06  MaxObs: 15.49, 8.73, 5.75  up 0+20:48:10  19:14:49
>> 426 threads: 9 running, 394 sleeping, 1 stopped, 22 waiting, 50 MaxObsRunning
>> CPU: 0.0% user, 0.0% nice, 0.2% system, 0.1% interrupt, 99.7% idle
>> Mem: 282760Ki Active, 7716Mi Inact, 23192Ki Laundry, 22444Mi Wired, 2780Ki Buf, 848840Ki Free, 2278Mi MaxObsActive, 22444Mi MaxObsWired, 22752Mi MaxObs(Act+Wir+Lndry)
>> ARC: 11359Mi Total, 3375Mi MFU, 5900Mi MRU, 993Mi Anon, 93076Ki Header, 992Mi Other
>>      8276Mi Compressed, 19727Mi Uncompressed, 2.38:1 Ratio
>> Swap: 120832Mi Total, 120832Mi Free, 2301Mi MaxObs(Act+Lndry+SwapUsed), 22752Mi MaxObs(Act+Wir+Lndry+SwapUsed)
>>
>>
>> The poudriere bulk has 8 builders but has ALLOW_MAKE_JOBS=yes
>> without any explicit settings for the likes of MAKE_JOBS_NUMBER.
>> So it is a configuration that allows a high load average compared
>> to the number of hardware threads (here: cores) in the system.
>>
>>
>> I've rebooted to do a test with filemon not loaded at the time
>> (here it was loaded from prior buildworld buildkernel activity).
>> We will see if it still ends up with such problems.
>
> It still ends up with things waiting, but the detailed STATE
> values generally listed are somewhat different.
>
> I also tried a chroot into a world from use of -mcpu=cortex-a72
> and got similar results, suggesting that only the
> -mcpu=cortex-a78c+flagm+nofp16fml kernel is required to see the
> issues. This even got some examples like:
>
> load: 1.20 cmd: sh 95560 [tx->tx_quiesce_done_cv] 2022.55r 3.21u 9.24s 0% 3852k
> #0 0xffff0000004fd564 at mi_switch+0x104
> #1 0xffff000000463f40 at _cv_wait+0x120
> #2 0xffff000001518a34 at txg_wait_open+0xf4
> #3 0xffff00000147d0bc at dmu_free_long_range+0x17c
> #4 0xffff000001421254 at zfs_rmnode+0x64
> #5 0xffff00000142e7c4 at zfs_freebsd_reclaim+0x34
> #6 0xffff000000a1340c at VOP_RECLAIM_APV+0x2c
> #7 0xffff0000005fd6c0 at vgonel+0x450
> #8 0xffff0000005fde7c at vrecycle+0x9c
> #9 0xffff00000142e678 at zfs_freebsd_inactive+0x48
> #10 0xffff0000005fc430 at vinactivef+0x180
> #11 0xffff0000005fba50 at vput_final+0x200
> #12 0xffff00015d8ceab4 at null_reclaim+0x154
> #13 0xffff000000a1340c at VOP_RECLAIM_APV+0x2c
> #14 0xffff0000005fd6c0 at vgonel+0x450
> #15 0xffff0000005fde7c at vrecycle+0x9c
> #16 0xffff00015d8ce8e8 at null_inactive+0x38
> #17 0xffff0000005fc430 at vinactivef+0x180
>
> (The chroot use involves null mounts.)
>
> which is sort of analogous to the filemon related
> backtraces I showed earlier. The common part
> across the examples looks to be #0..#11:
>
> #0 0xffff0000004fd564 at mi_switch+0x104
> #1 0xffff000000463f40 at _cv_wait+0x120
> #2 0xffff00000153fa34 at txg_wait_open+0xf4
> #3 0xffff0000014a40bc at dmu_free_long_range+0x17c
> #4 0xffff000001448254 at zfs_rmnode+0x64
> #5 0xffff0000014557c4 at zfs_freebsd_reclaim+0x34
> #6 0xffff000000a1340c at VOP_RECLAIM_APV+0x2c
> #7 0xffff0000005fd6c0 at vgonel+0x450
> #8 0xffff0000005fde7c at vrecycle+0x9c
> #9 0xffff000001455678 at zfs_freebsd_inactive+0x48
> #10 0xffff0000005fc430 at vinactivef+0x180
> #11 0xffff0000005fba50 at vput_final+0x200
>
> There were a lot more nanslp examples in all
> the later testing (i.e., the runs that avoided having
> filemon.ko loaded).
>
> Starting from having pkgclean -A'd the ports, the
> experiments got about the same number of ports built
> as of the end of the 1st hour.
>
>
>
> UFS vs. ZFS? Different media types? . . .
>
> So I decided to create and try a UFS test context
> instead of a ZFS one. But the media that was best
> to update was a U2 960GB Optane in a USB3
> adapter, something that would perform noticeably
> better than my normal USB3 NVMe drives, even with
> USB involved.
>
> This combination maintained reasonable load averages
> (instead of having long periods of <1) and finished
> building 172 ports in the 1st hour, far more than the
> around 83 each time I tried the other device/ZFS
> combination. No evidence of the earlier reported
> oddities.
>
> I should also time a from-scratch buildworld
> buildkernel.
>
> I'll look into setting up another U2 960GB Optane
> for use in the USB3 adapter, but with ZFS. That
> should help isolate media vs. file system
> contributions to the varying behaviors.

===
Mark Millard
marklmi at yahoo.com
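For reference, the observation workflow used above boils down to a
few commands (a minimal sketch; the pid and the da0 device are
specific to this setup, and procstat is just an alternative to ^T
when capturing the same kernel stacks from a script):

  # Have ^T (SIGINFO) include the kernel stack of the reported process:
  sysctl kern.tty_info_kstacks=1

  # Watch per-provider I/O, as used for the "da0 dedicated to write
  # activity" observation:
  gstat -spod

  # Capture the kernel stack of a specific blocked process, given its
  # pid from top (e.g. the sh stuck in tx->tx_quiesce_done_cv):
  procstat -kk <pid>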