Re: An attempted test of main's "git: 2ad756a6bbb3" "merge openzfs/zfs@95f71c019" that did not go as planned

From: Alexander Motin <mav_at_FreeBSD.org>
Date: Mon, 04 Sep 2023 05:06:46 UTC
Mark,

On 03.09.2023 22:54, Mark Millard wrote:
> After that ^t produced the likes of:
> 
> load: 6.39  cmd: sh 4849 [tx->tx_quiesce_done_cv] 10047.33r 0.51u 121.32s 1% 13004k

So the full state is not "tx->tx", but actually 
"tx->tx_quiesce_done_cv", which means the thread is waiting for a new 
transaction to be opened, which in turn means some previous one must 
first be quiesced and then synced.

> #0 0xffffffff80b6f103 at mi_switch+0x173
> #1 0xffffffff80bc0f24 at sleepq_switch+0x104
> #2 0xffffffff80aec4c5 at _cv_wait+0x165
> #3 0xffffffff82aba365 at txg_wait_open+0xf5
> #4 0xffffffff82a11b81 at dmu_free_long_range+0x151

Here it seems the thread is waiting on a transaction commit because of a 
large number of delete operations, which ZFS tries to spread across 
separate TXGs.  You should probably see a large and growing value in the 
sysctl kstat.zfs.misc.dmu_tx.dmu_tx_dirty_frees_delay .
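If it helps, here is one way to watch that counter over time (a sketch, assuming a FreeBSD shell with ZFS loaded; interval and log path are arbitrary):

```shell
# Sample the dirty-frees-delay counter every 5 seconds; a steadily
# growing value confirms ZFS is delaying frees across TXGs.
while true; do
    date
    sysctl kstat.zfs.misc.dmu_tx.dmu_tx_dirty_frees_delay
    sleep 5
done
```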

> #5 0xffffffff829a87d2 at zfs_rmnode+0x72
> #6 0xffffffff829b658d at zfs_freebsd_reclaim+0x3d
> #7 0xffffffff8113a495 at VOP_RECLAIM_APV+0x35
> #8 0xffffffff80c5a7d9 at vgonel+0x3a9
> #9 0xffffffff80c5af7f at vrecycle+0x3f
> #10 0xffffffff829b643e at zfs_freebsd_inactive+0x4e
> #11 0xffffffff80c598cf at vinactivef+0xbf
> #12 0xffffffff80c590da at vput_final+0x2aa
> #13 0xffffffff80c68886 at kern_funlinkat+0x2f6
> #14 0xffffffff80c68588 at sys_unlink+0x28
> #15 0xffffffff8106323f at amd64_syscall+0x14f
> #16 0xffffffff8103512b at fast_syscall_common+0xf8

What we don't see here is what the quiesce and sync threads of the pool 
are actually doing.  The sync thread has plenty of different jobs, 
including async write, async destroy, scrub and others, all of which may 
delay each other.

Before you reboot the system, depending on how alive it is, could you 
save a number of outputs of `procstat -akk`, or at least 
`procstat -akk | grep txg_thread_enter` if the full output is too much? 
Or somehow else observe what those threads are doing.
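For example, something along these lines would capture a few snapshots for later comparison (a sketch; the count, interval and /var/tmp paths are just suggestions):

```shell
# Save several full procstat snapshots, plus the txg thread stacks,
# so their state can be compared across samples.
for i in 1 2 3 4 5; do
    procstat -akk > /var/tmp/procstat.$i.txt
    grep txg_thread_enter /var/tmp/procstat.$i.txt
    sleep 10
done
```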

Outputs of `zpool status`, `zpool get all` and `sysctl -a` would also 
not hurt.

PS: I may be wrong, but the USB in "USB3 NVMe SSD storage" makes me 
shiver.  Make sure there are no storage problems, such as huge delays or 
timeouts; those can show up, for example, as busy percentages regularly 
spiking far above 100% in your `gstat -spod`.
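To keep a record rather than watch interactively, gstat's batch mode can be logged to a file (a sketch; the 5-second interval and log path are assumptions):

```shell
# -b runs gstat non-interactively (batch output), -I sets the
# refresh interval; grep the log later for %busy values over 100.
gstat -bspod -I 5s > /var/tmp/gstat.log
```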

-- 
Alexander Motin