Re: 13-STABLE high idprio load gives poor responsiveness and excessive CPU time per task

From: Mark Millard <>
Date: Tue, 27 Feb 2024 05:48:29 UTC
Questions include (a generic list for reference,
even if some have already been specified):

For /boot/loader.conf (for example) :

What value of sysctl vm.pageout_oom_seq is in use?

This indirectly adjusts the delay before sustained
low free RAM leads to killing processes. The default
is 12, but 120 is what I use across a wide variety
of systems. Larger values are possible.
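As a sketch, the corresponding /boot/loader.conf
entry (120 is just the value I happen to use, not a
universal recommendation):

```shell
# /boot/loader.conf
# Delay OOM process kills under sustained low free RAM
# (default: 12):
vm.pageout_oom_seq=120
```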

For /etc/sysctl.conf :

What values of sysctl vm.swap_enabled and
sysctl vm.swap_idle_enabled are in use? (They work as
a pair.)

Together they can prevent kernel stacks from being
swapped out. (Processes can still page out inactive
pages, but not their kernel stacks.) Processes with
their kernel stacks swapped out to storage media do
not run until those stacks are swapped back in.
Avoiding that for the kernel stacks of processes
involved in interacting with the system can be
important to maintaining control. This is a big
hammer, though: it is not limited to such processes.
Setting both to 0 is what prevents kernel stacks
from being swapped out.
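A minimal sketch of the matching /etc/sysctl.conf
entries (both must be 0 together, per the above):

```shell
# /etc/sysctl.conf
# Keep kernel stacks from being swapped out
# (the pair works together; both must be 0):
vm.swap_enabled=0
vm.swap_idle_enabled=0
```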

For /usr/local/etc/poudriere.conf :

What values of the following are in use?


(Some, of course, may still have their default
values, in which case the default value is the
answer.)

Also: is there other tmpfs use on the system,
outside poudriere?

Is ZFS in use on the system even if poudriere has
NO_ZFS set? (Such is likely uncommon, but it is
possible.)

(Other contexts than poudriere could have some
analogous questions.)
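One way to check both of the above (FreeBSD
commands; a diagnostic sketch, output is
system-dependent):

```shell
# List any tmpfs mounts on the system:
df -t tmpfs
# See whether any ZFS pools exist, even with
# poudriere's NO_ZFS set:
zpool list
```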

For /usr/local/etc/poudriere.d/make.conf (for example) :

What value of the likes of MAKE_JOBS_NUMBER is
in use?

Settings like MAKE_JOBS_NUMBER have, as their
context, the number of hardware threads on the
system. The three load averages (over different
time frames) vs. the system's hardware thread
count are relevant information.
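As an illustration only (the right value depends on
the hardware thread count and what else runs; the 4
below is a hypothetical figure, not a
recommendation):

```shell
# /usr/local/etc/poudriere.d/make.conf
# Hypothetical cap on per-build parallel make jobs:
MAKE_JOBS_NUMBER=4
```

On FreeBSD, sysctl -n hw.ncpu reports the hardware
thread count, and uptime shows the three load
averages for comparison.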

Note: with various examples of package builds that
use 25+ GiBytes of temporary file space, USE_TMPFS
can be highly relevant, as are the RAM space, the
SWAP space, and the resultant RAM+SWAP space. But
just the file I/O can be relevant, even if there is
no tmpfs use.
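FreeBSD commands that report the RAM and SWAP
figures in question (a diagnostic sketch; output is
system-dependent):

```shell
# Physical RAM in bytes:
sysctl -n hw.physmem
# Configured swap devices and their usage:
swapinfo -h
```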

There are questions like: is spinning-rust media
in use? (An over-specific but suggestive example
from the more general subject area.)

Does a serial console show a responsiveness problem?
Does a simple ssh session over local Ethernet? Only
when a GUI is present, even if it is not being
actively used? Or do only GUI interactions show a
responsiveness problem?

Going in another direction . . .

I'm no ZFS tuning expert, but I had performance
problems that I described on the lists, and the
person who had increased
vfs.zfs.per_txg_dirty_frees_percent had me try
setting it back to
vfs.zfs.per_txg_dirty_frees_percent=5 . In my
context the change was very helpful -- but, to
me, it was pure magic. My point is more that you
may need judgments from someone with appropriate
internal ZFS knowledge if you are to explore
tuning ZFS. I have no evidence that this specific
setting would be helpful here.

There has been an effort to deal with arc_prune
problems/overhead. See:

Mark Millard
marklmi at