Re: llvm10 build failure on Rpi3

From: Mark Millard via freebsd-ports <>
Date: Wed, 23 Jun 2021 14:03:42 -0700
On 2021-Jun-23, at 10:43, bob prohaska <fbsd at> wrote:

> On Wed, Jun 23, 2021 at 01:34:55AM -0700, Mark Millard wrote:
>> Not that it helps much, but: 2779096485 == 0xA5A5A5A5
>> It appears that such somehow was involved-in/generated by:
>> [ 24% 1326/5364] cd /wrkdirs/usr/ports/devel/llvm10/work/.build && /wrkdirs/usr/ports/devel/llvm10/work/.build/bin/llvm-tblgen -gen-global-isel -I /wrkdirs/usr/ports/devel/llvm10/work/llvm-10.0.1.src/lib/Target/AMDGPU -I /wrkdirs/usr/ports/devel/llvm10/work/llvm-10.0.1.src/include -I /wrkdirs/usr/ports/devel/llvm10/work/llvm-10.0.1.src/lib/Target /wrkdirs/usr/ports/devel/llvm10/work/llvm-10.0.1.src/lib/Target/AMDGPU/ --write-if-changed -o lib/Target/AMDGPU/ -d lib/Target/AMDGPU/
>> and that led to the commented-out notation in the output, with the "_at_2779096485" listed in the comment as well.
> A Pi4 doing a bulk build of chromium, lxqt and apache has gone far past that
> point building llvm10, suggesting the fault lies somewhere in my setup.

I'm not so sure of that for the 0xA5A5A5A5u value. You run
main [so: 14 at this point]. Is it a debug build? Or a
non-debug build? I expect that 0xA5A5A5A5u has some specific
debug-build potential meaning.

For example, 0xA5u byte values might be the value that newly
allocated memory is initialized to. Looking . . . man jemalloc
(the memory allocator implementation used by FreeBSD) reports:

       opt.junk (const char *) r- [--enable-fill]
           Junk filling. If set to “alloc”, each byte of uninitialized
           allocated memory will be initialized to 0xa5. If set to “free”, all
           deallocated memory will be initialized to 0x5a. If set to “true”,
           both allocated and deallocated memory will be initialized, and if
           set to “false”, junk filling be disabled entirely. This is intended
           for debugging and will impact performance negatively. This option
           is “false” by default unless --enable-debug is specified during
           configuration, in which case it is “true” by default.
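To illustrate the connection (a sketch, not taken from llvm-tblgen itself): if a 32-bit value is read from memory whose bytes were junk-filled with 0xa5, the number seen is exactly 2779096485. Simulating the fill with printf/od shows the arithmetic:

```shell
# Simulate jemalloc's opt.junk "alloc" fill: four 0xa5 bytes,
# then read them back as one unsigned 32-bit integer.
tmp=$(mktemp)
printf '\245\245\245\245' > "$tmp"   # \245 octal == 0xa5

# od -t u4 prints the 4 bytes as an unsigned 32-bit value.
od -A n -t u4 "$tmp" | tr -d ' '
# -> 2779096485  (i.e. 0xA5A5A5A5, same in either byte order
#    since all four bytes are identical)
```

So a 2779096485 showing up in tblgen output is consistent with something reading a never-initialized 32-bit value out of a junk-filled allocation.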

So, if you have junk filling enabled, I expect that you ran
into a legitimate defect in the llvm-tblgen in use. Disabling
junk filling might be a workaround.

There is /etc/malloc.conf as a way of controlling the behavior:

ln -s 'junk:false' /usr/local/poudriere/poudriere-system/etc/malloc.conf
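One point worth making explicit (the jail-root path above is assumed from your poudriere layout): jemalloc parses the symbolic link's target string itself as its option list, so the link need not point at an existing file, and readlink shows exactly what jemalloc will see. A sketch in a scratch directory:

```shell
# Demonstration in a scratch directory; the real link would go in the
# jail's etc/, as in the ln -s command above.
dir=$(mktemp -d)

# The *target string* of the symlink is the jemalloc option list.
ln -s 'junk:false' "$dir/malloc.conf"

# What jemalloc would parse:
readlink "$dir/malloc.conf"
# -> junk:false
```

If I remember right, the MALLOC_CONF environment variable is another way to pass the same option string per-process, but the symlink is the simplest jail-wide control here.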

I suggest you retry building after getting the above in place.
If the retry does not produce the 0xA5A5A5A5u value, that would
be more evidence of an uninitialized-memory defect in the
llvm-tblgen in use.

I do not normally run debug builds and so would not have
run into 0xA5A5A5A5u from Junk Filling of memory allocations.

I'm not sure when I can set up and do a junk-filling experiment
(in a debug main build?). But it looks like some independent
compare/contrast activity might be appropriate.

> The instructions you gave for setting up poudriere seemed to work perfectly
> initially, but since that time both world and kernel have been updated
> along with ports. Is it necessary or advisable to alter /usr/local/poudriere,
> either by  update commands or complete replacement? 

I will note that your log file reports:

Host OSVERSION: 1400023
Jail OSVERSION: 1400019
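As far as I know, those OSVERSION figures come from the __FreeBSD_version define in sys/param.h (from /usr/include/sys/param.h on the host, and from the jail's copy of the tree), so they can be compared directly. A sketch against a simulated header fragment (the real command would point at the actual param.h paths):

```shell
# OSVERSION is derived from __FreeBSD_version in sys/param.h.
# Simulate a param.h fragment, then extract the value the same way
# one could against /usr/include/sys/param.h or the jail's copy.
hdr=$(mktemp)
cat > "$hdr" <<'EOF'
#undef __FreeBSD_version
#define __FreeBSD_version 1400023
EOF

awk '/^#define[[:space:]]+__FreeBSD_version/ {print $3}' "$hdr"
# -> 1400023
```

Running that against the host header and the jail's header would show the 1400023 vs. 1400019 mismatch directly.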

So your jail's OSVERSION is older than the environment
that it is running in. (Unlikely to contribute to the
0xA5A5A5A5u as far as I can tell.) In other words, you
have not updated your:


to 1400023 as far as I can tell.

Separately from that, for poudriere itself:

I do not know whether you are using ports-mgmt/poudriere-devel or
ports-mgmt/poudriere. But whichever it is, it is a port, and it is
one of the ports that should be rebuilt when it has been updated
as part of updating your /usr/ports content; its installed package
should then be updated via pkg like the other ports.

I list ports-mgmt/poudriere-devel in the file with the other
ports that I list in ~/origins/CA72-origins.txt and I use
that file via -f in the bulk command.

But nothing about these is likely to avoid the 0xA5A5A5A5u
issue that you ran into.

Mark Millard
marklmi at
( went
away in early 2018-Mar)
Received on Wed Jun 23 2021 - 21:03:42 UTC
