Re: mongodb - Re: Arm v7 RPi2 -current unresponsive to debugger escape during buildworld

From: Mark Millard <marklmi_at_yahoo.com>
Date: Fri, 07 Nov 2025 18:14:20 UTC
On Nov 7, 2025, at 08:26, Paul Mather <paul@gromit.dlib.vt.edu> wrote:

> On Nov 7, 2025, at 10:55 am, Ronald Klop <ronald-lists@klop.ws> wrote:
> 
>> Hi,
>> 
>> A bit off topic, but as maintainer of mongodb70 I can tell you that I do my test building on a RPI4 with 8GB of memory. It has 16 GB of swap, but that isn't used that much during my build tests.
>> 
>> I do add LDFLAGS+= -Wl,--threads=1 to the build. In my experience the linker is using a lot of memory when multi threaded and at the end of the mongo build you end up with 2 or 3 binaries being linked in parallel if you are unlucky.
> >> You can also play with MAKE_JOBS_NUMBER=3 to keep it from running too much in parallel.
> >> Of course limiting parallelism makes the duration longer, unless it is swapping so much that sequential compiling without swapping is faster than parallel building that thrashes the swap space. Find the sweet spot.
>> 
>> And yes, MongoDB is a monster to compile.
> 
> 
> Many thanks for the MongoDB compilation tips.  An extra complicating factor in my case is that I'm building via Poudriere, and so the poudriere.conf settings can confound things when it comes to controlling resource usage.  The mongodb70 port has caused me to change my poudriere.conf settings.
> 
> Before I started building mongodb70, I had PARALLEL_JOBS=1; ALLOW_MAKE_JOBS=yes; TMPFS_LIMIT=8; MAX_MEMORY=5; and USE_TMPFS=all.  Now, I have commented out PARALLEL_JOBS=1; ALLOW_MAKE_JOBS=yes; TMPFS_LIMIT=8; and MAX_MEMORY=5, and have USE_TMPFS="wrkdir data".  I also added mongodb70 to TMPFS_BLACKLIST: TMPFS_BLACKLIST="rust mongodb*".
> 
> So, I went from 1 builder with multiple make jobs to multiple builders with just 1 make job.  (Before needing to build ports like rust and mongodb70, I used to have multiple builders with multiple make jobs per builder.)  I've also dialled back a little in what TMPFS can be used for.  Usually, the system runs with 16 GB RAM and 8 GB swap on a 6-core system.  Right now, I have to add extra swap to let mongodb70 build successfully.  I suspect prior TMPFS usage is not helping matters.  Also, I don't know whether MongoDB is using a reproducible build, because ccache doesn't seem to speed things up much for me after a failed build.  That's just a gut feeling, though.
> 
> I can echo your observation that the swap doesn't appear to be used much for most of the build.  It's just when it comes to a certain point where everything explodes and LLVM dies from OOM.  Adding extra swap has got me past that point.
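
As an aside: for anyone wanting to try Ronald's LDFLAGS /
MAKE_JOBS_NUMBER suggestion under poudriere, that kind of
per-port override can go in poudriere's make.conf. A minimal
sketch (the path and the .CURDIR match are just illustrative,
not something from this thread):

   # e.g. /usr/local/etc/poudriere.d/make.conf
   .if ${.CURDIR:M*/databases/mongodb70}
   MAKE_JOBS_NUMBER=3
   # single-threaded lld to limit peak link-time memory
   LDFLAGS+= -Wl,--threads=1
   .endif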


The last time I did a "bulk -ca" test that monitored builder
TMPFS usage with USE_TMPFS=all (no blacklist), the builder
runs with the largest usage were:

TMPFS: 39.11 GiB usr/local/ SIZE: 1.85 GiB powerpc64le-rust-bootstrap-1.87.0
TMPFS: 38.77 GiB usr/local/ SIZE: 1.81 GiB powerpc-rust-bootstrap-1.87.0
TMPFS: 38.74 GiB usr/local/ SIZE: 1.84 GiB powerpc64-rust-bootstrap-1.87.0
TMPFS: 38.61 GiB usr/local/ SIZE: 1.89 GiB aarch64-rust-bootstrap-1.87.0
TMPFS: 37.13 GiB usr/local/ SIZE: 1.74 GiB armv7-rust-bootstrap-1.87.0
TMPFS: 36.61 GiB usr/local/ SIZE: 1.75 GiB i386-rust-bootstrap-1.87.0
TMPFS: 35.93 GiB usr/local/ SIZE: 4.18 GiB electron35-35.6.0
TMPFS: 35.29 GiB usr/local/ SIZE: 0.32 GiB rust-nightly-1.90.0.20250624
TMPFS: 34.01 GiB usr/local/ SIZE: 0.32 GiB rust-1.87.0
TMPFS: 31.48 GiB usr/local/ SIZE: 4.43 GiB iridium-browser-2025.06.137.3
TMPFS: 31.05 GiB usr/local/ SIZE: 1.34 GiB clickhouse-22.1.3.7
TMPFS: 30.84 GiB usr/local/ SIZE: 0.35 GiB gcc-arm-embedded-14.2r1_1
TMPFS: 27.01 GiB usr/local/ SIZE: 1.51 GiB mongodb70-7.0.21_1
TMPFS: 26.35 GiB usr/local/ SIZE: 4.43 GiB ungoogled-chromium-137.0.7151.103
TMPFS: 23.98 GiB usr/local/ SIZE: 1.89 GiB amd64-rust-bootstrap-1.87.0
TMPFS: 22.55 GiB usr/local/ SIZE: 2.53 GiB linux-ai-ml-env-1.0.0
TMPFS: 16.66 GiB usr/local/ SIZE: 2.75 GiB 0ad-0.27.0_9
TMPFS: 15.31 GiB usr/local/ SIZE: 3.48 GiB deno-2.2.9_1
TMPFS: 15.28 GiB usr/local/ SIZE: 4.08 GiB thunderbird-140.0_1
TMPFS: 14.69 GiB usr/local/ SIZE: 4.08 GiB librewolf-139.0.4
TMPFS: 14.65 GiB usr/local/ SIZE: 0.28 GiB grafana-12.0.2
TMPFS: 14.64 GiB usr/local/ SIZE: 4.14 GiB tor-browser-14.5.4
TMPFS: 14.44 GiB usr/local/ SIZE: 4.08 GiB thunderbird-esr-128.11.1
TMPFS: 13.93 GiB usr/local/ SIZE: 4.08 GiB firefox-140.0.2,2
TMPFS: 13.81 GiB usr/local/ SIZE: 4.18 GiB gstreamer1-plugins-rust-0.13.6
TMPFS: 13.71 GiB usr/local/ SIZE: 0.67 GiB llvm-devel-21.0.d20250403
TMPFS: 13.68 GiB usr/local/ SIZE: 2.04 GiB xtensa-esp-elf-13.2.0.20240530_8
TMPFS: 13.63 GiB usr/local/ SIZE: 3.24 GiB qt6-webengine-6.9.1
TMPFS: 13.46 GiB usr/local/ SIZE: 4.08 GiB waterfox-6.5.9_1,1
TMPFS: 13.09 GiB usr/local/ SIZE: 0.62 GiB alloy-1.6.1_3
TMPFS: 13.04 GiB usr/local/ SIZE: 4.08 GiB firefox-esr-128.12.0,1
TMPFS: 12.78 GiB usr/local/ SIZE: 0.28 GiB awslim-0.4.0
TMPFS: 12.52 GiB usr/local/ SIZE: 0.28 GiB telegraf-1.35.1
TMPFS: 12.50 GiB usr/local/ SIZE: 0.11 GiB texlive-docs-20250308
TMPFS: 12.47 GiB usr/local/ SIZE: 0.33 GiB ghc96-9.6.7
TMPFS: 12.17 GiB usr/local/ SIZE: 0.06 GiB nerd-fonts-3.3.0
TMPFS: 12.12 GiB usr/local/ SIZE: 0.33 GiB ghc94-9.4.8_1
TMPFS: 11.92 GiB usr/local/ SIZE: 0.28 GiB vault-1.19.5
TMPFS: 11.78 GiB usr/local/ SIZE: 0.33 GiB ghc92-9.2.8_1
TMPFS: 11.35 GiB usr/local/ SIZE: 0.28 GiB grafana-loki-2.9.2_13
TMPFS: 11.01 GiB usr/local/ SIZE: 3.31 GiB virtualbox-ose-71-7.1.10_1
TMPFS: 10.92 GiB usr/local/ SIZE: 0.28 GiB trivy-0.63.0_1
TMPFS: 10.38 GiB usr/local/ SIZE: 0.28 GiB vuls-0.33.1
TMPFS: 10.29 GiB usr/local/ SIZE: 0.67 GiB llvm20-20.1.6
TMPFS: 10.27 GiB usr/local/ SIZE: 0.67 GiB llvm19-19.1.7_1
TMPFS: 10.24 GiB usr/local/ SIZE: 0.67 GiB llvm18-18.1.8_2
TMPFS: 10.15 GiB usr/local/ SIZE: 1.75 GiB ringrtc-2.53.0
. . . Below 10 GiBytes omitted here . . .

It was an AMD64 context.

So mongodb70 used a little over 27 GiBytes of TMPFS and
rust a little over 34 GiBytes. So, with bad relative
timing, i.e. both building at the same time, that is a
possible contribution of a little over 61 GiBytes to
TMPFS use.

Of course, without TMPFS use, that space comes from normal
storage media instead of RAM+SWAP.

Such things have led me to use
MUTUALLY_EXCLUSIVE_BUILD_PACKAGES as part of managing
RAM+SWAP use/competition and/or file system space,
even when TMPFS_BLACKLIST is in use for some things.
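
As an example of the sort of thing I mean (the patterns are
just illustrative; check poudriere.conf.sample for the exact
syntax on your poudriere version):

   # poudriere.conf
   MUTUALLY_EXCLUSIVE_BUILD_PACKAGES="rust* mongodb* electron*"
   TMPFS_BLACKLIST="rust mongodb*"

That keeps the biggest TMPFS users from building at the same
time, while still keeping some of them off TMPFS entirely.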

(I also build some port-packages on system configurations
that have no chance of being able to handle USE_TMPFS=all
or the like without TMPFS_BLACKLIST and such.)

Also, there are some port-packages for which
TMPFS_BLACKLIST does not help nearly as much, so they
still use significant file system space even when listed
in TMPFS_BLACKLIST.


===
Mark Millard
marklmi at yahoo.com