From: Dennis Clarke <dclarke@blastwave.org>
Date: Fri, 23 May 2025 15:13:37 -0400
Subject: Re: Is there a way to tell poudriere to allocate more memory to a pkg build?
To: Mark Millard, FreeBSD Current
List-Archive: https://lists.freebsd.org/archives/freebsd-current

On 5/23/25 15:00, Mark Millard wrote:
> Dennis Clarke wrote on
> Date: Fri, 23 May 2025 17:45:17 UTC :
>
>> I have been watching qt6-webengine-6.8.3 fail over and over and over
>> for some days now, and it takes a pile of other stuff down with it.
>>
>> In the log I see this unscripted trash of a message:
>>
>> [00:05:03] FAILED: v8_context_snapshot.bin
>> [00:05:03] /usr/local/bin/python3.11
>> ../../../../../qtwebengine-everywhere-src-6.8.3/src/3rdparty/chromium/build/gn_run_binary.py ./v8_context_snapshot_generator --output_file=v8_context_snapshot.bin
>> [00:05:03]
>> [00:05:03] #
>> [00:05:03] # Fatal error in , line 0
>> [00:05:03] # Oilpan: Out of memory
>> [00:05:03] #
>> [00:05:03] #
>
> Way too little context, so all I can do at this point is ask
> questions.
>

Sorry ... I just realized that other people replied to me OFF-LIST and
that is not helpful to others.
So the machine titan is fairly beefy:

    titan# uname -apKU
    FreeBSD titan 15.0-CURRENT FreeBSD 15.0-CURRENT #1 main-n277353-19419d36cf2a: Mon May 19 20:40:28 UTC 2025     root@titan:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64 amd64 1500043 1500043
    titan# sysctl hw.model
    hw.model: Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
    titan# sysctl hw.ncpu
    hw.ncpu: 64
    titan# sysctl hw.physmem
    hw.physmem: 549598998528
    titan# sysctl hw.freemem
    sysctl: unknown oid 'hw.freemem'
    titan# sysctl kstat.zfs.misc.arcstats.memory_free_bytes
    kstat.zfs.misc.arcstats.memory_free_bytes: 404796436480
    titan# sysctl vm.kmem_map_free
    vm.kmem_map_free: 431405096960

There is also plenty of storage, local NVMe devices, etc., and dual
NVidia GPUs that do nothing at all. For now.

We (myself and others) have already found that the problem was me. No
big surprise. My poudriere config had:

    USE_TMPFS=yes
    TMPFS_LIMIT=32
    MAX_MEMORY=32
    # MAX_FILES=1024
    MAX_EXECUTION_TIME=172800
    PARALLEL_JOBS=64
    PREPARE_PARALLEL_JOBS=64

That was the problem. I commented out MAX_MEMORY and TMPFS_LIMIT and
then watched www/qt6-webengine build just fine. Guess the jail needed
more than 32G, eh?

> I assume that you have not explicitly restricted the memory
> space for any processes, so that RAM+SWAP is fully available
> to everything. If not, you need to report on the details.
>

Yup ... I had restrictions in place. Those very few packages are hogs.
Just massive running pigs for memory, it seems.

> How much RAM? How much SWAP space? (So: how much RAM+SWAP?)
> (RAM+SWAP does not vary per process tree or per builder,
> presuming no deliberate restrictions have been placed.)

512G of memory and 32G of swap, which never gets touched.

> Do you even have "whatever it seems to want" configured
> for the RAM+SWAP? (I'm guessing that you do not know that
> the "128G" figure is in fact involved.)
>

I commented out those restrictions.
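For anyone hitting the same wall, here is a sketch of the adjusted
poudriere.conf. The commented-out knobs are the ones that caused the
OOM here; the TMPFS_BLACKLIST line is an alternative I have not tested
myself, and the port list in it is only an example:

```shell
# poudriere.conf (sketch): remove the per-builder memory caps that
# starved qt6-webengine's v8_context_snapshot_generator.
USE_TMPFS=yes
# TMPFS_LIMIT=32      # was capping each builder's tmpfs at 32 GiB
# MAX_MEMORY=32       # was capping each builder's memory at 32 GiB
# MAX_FILES=1024
MAX_EXECUTION_TIME=172800
PARALLEL_JOBS=64
PREPARE_PARALLEL_JOBS=64

# Alternative, if you want to keep tmpfs for everything else: poudriere
# supports TMPFS_BLACKLIST to exempt specific origins from tmpfs.
# (Example ports only -- adjust to whatever is eating your RAM.)
# TMPFS_BLACKLIST="www/qt6-webengine www/chromium"
```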
Makes me worry that some other package will come along and fail because
it needs 384G of memory or something silly like that. I have been
advised (in the last hour) that the chromium ports generate over 40000
source files and such. That is just abusive, but it is the way of the
future, I am sure.

> How many parallel builders are active in the bulk run
> as the bulk build approaches the failure?

I think 64 max.

> How much RAM+SWAP in use by other builders or other things
> on the system as the system progresses to that failure (not
> after the failure)?
>

.... *sigh* The problem was me.

> ZFS (and its ARC)? UFS? If ZFS: Any tuning?
>

No tuning. It just works(tm), and that is ZFS.

> Basically: all the significant sources of competing for
> RAM+SWAP?
>
> ....
> ===
> Mark Millard
> marklmi at yahoo.com

It feels like the correct approach is to just give everything to the
poudriere bulk run and then watch for flames. No flames? No smoke?
Great ... it is working.

--
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken