After update to r357104 build of poudriere jail fails with 'out of swap space'

Cy Schubert Cy.Schubert at cschubert.com
Mon Jan 27 18:45:53 UTC 2020


On January 27, 2020 5:09:06 AM PST, Cy Schubert <Cy.Schubert at cschubert.com> wrote:
>In message <202001261745.00QHjkuW044006 at gndrsh.dnsmgr.net>, "Rodney W. 
>Grimes" writes:
>> > In message <20200125233116.GA49916 at troutmask.apl.washington.edu>, 
>> > Steve Kargl writes:
>> > > On Sat, Jan 25, 2020 at 02:09:29PM -0800, Cy Schubert wrote:
>> > > > On January 25, 2020 1:52:03 PM PST, Steve Kargl 
>> > > > <sgk at troutmask.apl.washington.edu> wrote:
>> > > > >On Sat, Jan 25, 2020 at 01:41:16PM -0800, Cy Schubert wrote:
>> > > > >> 
>> > > > >> It's not just poudriere. Standard port builds of chromium, 
>> > > > >> rust, and thunderbird also fail on my machines with less 
>> > > > >> than 8 GB.
>> > > > >>
>> > > > >
>> > > > >Interesting.  I routinely build chromium, rust, firefox,
>> > > > >llvm, and a few other resource-hungry ports on an i386-freebsd
>> > > > >laptop with 3.4 GB of available memory.  This is done with
>> > > > >chrome running with a few tabs swallowing 1-1.5 GB of
>> > > > >memory.  No issues.
>> > > > 
>> > > > Number of threads makes a difference too. How many cores/threads 
>> > > > does your laptop have?
>> > >
>> > > 2 cores.
>> > 
>> > This is why.
>> > 
>> > >
>> > > > Reducing the number of concurrent threads allowed my builds to 
>> > > > complete on the 5 GB machine. My build machines have 4 cores, 
>> > > > 1 thread per core. Reducing concurrent threads circumvented the 
>> > > > issue.
>> > >
>> > > I use portmaster, and AFAICT, it uses 'make -j 2' for the build.
>> > > The laptop isn't doing much besides the update and browsing.  It 
>> > > does take a long time, especially if building llvm is required.
>> > 
>> > I use portmaster as well (for quick incidental builds). It uses 
>> > MAKE_JOBS_NUMBER=4 (which is equivalent to make -j 4). I suppose 
>> > machines without enough memory to support their cores during certain 
>> > builds are more likely to have this problem.
>> > 
>> > Setting MAKE_JOBS_NUMBER_LIMIT to limit a 4-core machine with less 
>> > than 2 GB per core might be an option. Looking at it this way, instead 
>> > of an extra 3 GB, the extra 60% more memory in the other machine makes 
>> > a big difference. A rule of thumb would probably be: have ~2 GB of RAM 
>> > for every core or thread when doing large parallel builds.
>>
>> Perhaps we need to redo some boot time calculations. For one, the
>> ZFS ARC cache, IMHO, is just silly at a fixed percent of total
>> memory.  A high percentage at that.
>>
>> One idea based on what you just said might be:
>>
>> percore_memory_reserve = 2G (your number; I personally would use 1G here)
>> arc_max = MAX(memory_size - (cores * percore_memory_reserve), 512MB)
>>
>> I think that simple change would go a long way toward cutting down the
>> number of OOM reports we see.  ALSO, IMHO, there should be a way for
>> subsystems to easily tell ZFS that they are memory pigs too and need to
>> share the space.  I.e., bhyve is horrible if you do not tune the ZFS ARC
>> based on how much memory you're using up for VMs.
>>
>> Another formulation might be
>> percore_memory_reserve = alpha * memory_size / cores
>>
>> Alpha most likely falls in the 0.25 to 0.5 range. I think this one
>> would have better scalability; I would need to run some numbers. It
>> probably needs to become non-linear above some core count.
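[As an illustrative sketch only: the helper names and the Python rendering below are mine, not existing loader tunables or kernel code. The two formulations quoted above work out to the following.]

```python
GIB = 1024 ** 3
MIB = 1024 ** 2
ARC_FLOOR = 512 * MIB  # the proposed 512 MB lower bound on arc_max

def arc_max_fixed_reserve(mem_bytes: int, cores: int,
                          percore_reserve: int = 2 * GIB) -> int:
    """First formulation: reserve a fixed amount of RAM per core and
    give the remainder (but never less than 512 MB) to the ARC."""
    return max(mem_bytes - cores * percore_reserve, ARC_FLOOR)

def arc_max_scaled_reserve(mem_bytes: int, cores: int,
                           alpha: float = 0.5) -> int:
    """Second formulation: percore_memory_reserve = alpha * memory_size /
    cores, which reserves an alpha fraction of total memory overall."""
    percore_reserve = alpha * mem_bytes / cores
    return int(max(mem_bytes - cores * percore_reserve, ARC_FLOOR))

# 8 GB, 4 cores: a 2 GB/core reserve consumes everything, so the floor applies.
print(arc_max_fixed_reserve(8 * GIB, 4) // MIB)            # 512
# With the suggested 1 GB/core reserve instead, the ARC may use 4 GiB.
print(arc_max_fixed_reserve(8 * GIB, 4, 1 * GIB) // GIB)   # 4
# alpha = 0.5 leaves half of the 8 GiB to the ARC.
print(arc_max_scaled_reserve(8 * GIB, 4) // GIB)           # 4
```

Note that with the fixed 2 GB/core reserve, an 8 GB 4-core machine falls all the way to the 512 MB floor, which is why the per-core number (1G vs. 2G) matters so much in the first formulation.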
>
>Setting a lower arc_max at boot is unlikely to help. Rust was building on 
>the 8 GB and 5 GB 4-core machines last night. It completed successfully 
>on the 8 GB machine, while using 12 MB of swap. ARC was at 1307 MB.
>
>On the 5 GB 4-core machine the rust build died of OOM. 328 KB of swap 
>was used. ARC was reported at 941 MB. arc_min on this machine is 489.2 MB.

MAKE_JOBS_NUMBER=3 worked for building rust on the 5 GB 4-core machine. ARC is at 534 MB with 12 MB of swap used.
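[The ~2 GB-per-core rule of thumb discussed earlier in the thread can be sketched as a small calculation. This helper is hypothetical, for illustration only; it is not an actual ports framework knob, and it comes out slightly more conservative than the MAKE_JOBS_NUMBER=3 that worked in practice here.]

```python
def suggested_make_jobs(cores: int, mem_gb: float, gb_per_job: float = 2.0) -> int:
    """Cap parallel build jobs by both core count and available memory.

    Rule of thumb from the thread: allow roughly one job per ~2 GB of RAM,
    never exceeding the number of cores, and always at least one job.
    """
    mem_limited = int(mem_gb // gb_per_job)
    return max(1, min(cores, mem_limited))

# The 5 GB / 4-core machine from the thread: memory, not cores, is the limit.
print(suggested_make_jobs(cores=4, mem_gb=5))   # 2
# The 8 GB / 4-core machine: enough memory to feed all four cores.
print(suggested_make_jobs(cores=4, mem_gb=8))   # 4
```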


-- 
Pardon the typos and autocorrect, small keyboard in use. 
Cy Schubert <Cy.Schubert at cschubert.com>
FreeBSD UNIX: <cy at FreeBSD.org> Web: https://www.FreeBSD.org

The need of the many outweighs the greed of the few.

Sent from my Android device with K-9 Mail. Please excuse my brevity.


More information about the freebsd-current mailing list