From nobody Tue Apr 23 22:34:25 2024
To: pkg-fallout@FreeBSD.org
Subject: [package - 140releng-armv7-default][misc/llama-cpp] Failed for llama-cpp-2619 in build
Date: Tue, 23 Apr 2024 22:34:25 +0000
Message-Id: <66283771.ca08.2c5db631@ampere3.nyi.freebsd.org>
From: pkg-fallout@FreeBSD.org
List-Id: Fallout logs from package building
List-Archive: https://lists.freebsd.org/archives/freebsd-pkg-fallout
Sender: owner-freebsd-pkg-fallout@FreeBSD.org

You are receiving this mail as a port that you maintain is failing to build on
the FreeBSD package build server. Please investigate the failure and submit a
PR to fix the build.
Maintainer:     yuri@FreeBSD.org
Log URL:        https://pkg-status.freebsd.org/ampere3/data/140releng-armv7-default/2910ff97e727/logs/llama-cpp-2619.log
Build URL:      https://pkg-status.freebsd.org/ampere3/build.html?mastername=140releng-armv7-default&build=2910ff97e727

Log:

=>> Building misc/llama-cpp
build started at Tue Apr 23 22:32:48 UTC 2024
port directory: /usr/ports/misc/llama-cpp
package name: llama-cpp-2619
building for: FreeBSD 140releng-armv7-default-job-06 14.0-RELEASE-p6 FreeBSD 14.0-RELEASE-p6 1400097 arm
maintained by: yuri@FreeBSD.org
Makefile datestamp: -rw-r--r--  1 root  wheel  722 Apr 11 01:02 /usr/ports/misc/llama-cpp/Makefile
Ports top last git commit: 2910ff97e72
Ports top unclean checkout: no
Port dir last git commit: 9d446550b4e
Port dir unclean checkout: no
Poudriere version: poudriere-git-3.4.1-1-g1e9f97d6
Host OSVERSION: 1500006
Jail OSVERSION: 1400097
Job Id: 06

---Begin Environment---
SHELL=/bin/sh
OSVERSION=1400097
UNAME_v=FreeBSD 14.0-RELEASE-p6 1400097
UNAME_r=14.0-RELEASE-p6
BLOCKSIZE=K
MAIL=/var/mail/root
MM_CHARSET=UTF-8
LANG=C.UTF-8
STATUS=1
HOME=/root
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/root/bin
MAKE_OBJDIR_CHECK_WRITABLE=0
LOCALBASE=/usr/local
USER=root
POUDRIERE_NAME=poudriere-git
LIBEXECPREFIX=/usr/local/libexec/poudriere
POUDRIERE_VERSION=3.4.1-1-g1e9f97d6
MASTERMNT=/usr/local/poudriere/data/.m/140releng-armv7-default/ref
LC_COLLATE=C
POUDRIERE_BUILD_TYPE=bulk
PACKAGE_BUILDING=yes
SAVED_TERM=
OUTPUT_REDIRECTED_STDERR=4
OUTPUT_REDIRECTED=1
PWD=/usr/local/poudriere/data/.m/140releng-armv7-default/06/.p
OUTPUT_REDIRECTED_STDOUT=3
P_PORTS_FEATURES=FLAVORS SUBPACKAGES SELECTED_OPTIONS
MASTERNAME=140releng-armv7-default
SCRIPTPREFIX=/usr/local/share/poudriere
SCRIPTNAME=bulk.sh
OLDPWD=/usr/local/poudriere/data/.m/140releng-armv7-default/ref/.p/pool
POUDRIERE_PKGNAME=poudriere-git-3.4.1-1-g1e9f97d6
SCRIPTPATH=/usr/local/share/poudriere/bulk.sh
POUDRIEREPATH=/usr/local/bin/poudriere
---End Environment---

---Begin Poudriere Port Flags/Env---
PORT_FLAGS=
PKGENV=
FLAVOR=
MAKE_ARGS=
---End Poudriere Port Flags/Env---

---Begin OPTIONS List---
===> The following configuration options are available for llama-cpp-2619:
     EXAMPLES=on: Build and/or install examples
===> Use 'make config' to modify these settings
---End OPTIONS List---

--MAINTAINER--
yuri@FreeBSD.org
--End MAINTAINER--

--CONFIGURE_ARGS--

--End CONFIGURE_ARGS--

--CONFIGURE_ENV--
PYTHON="/usr/local/bin/python3.9" XDG_DATA_HOME=/wrkdirs/usr/ports/misc/llama-cpp/work XDG_CONFIG_HOME=/wrkdirs/usr/ports/misc/llama-cpp/work XDG_CACHE_HOME=/wrkdirs/usr/ports/misc/llama-cpp/work/.cache HOME=/wrkdirs/usr/ports/misc/llama-cpp/work TMPDIR="/tmp" PATH=/wrkdirs/usr/ports/misc/llama-cpp/work/.bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/root/bin PKG_CONFIG_LIBDIR=/wrkdirs/usr/ports/misc/llama-cpp/work/.pkgconfig:/usr/local/libdata/pkgconfig:/usr/local/share/pkgconfig:/usr/libdata/pkgconfig SHELL=/bin/sh CONFIG_SHELL=/bin/sh
--End CONFIGURE_ENV--

--MAKE_ENV--
NINJA_STATUS="[%p %s/%t] " XDG_DATA_HOME=/wrkdirs/usr/ports/misc/llama-cpp/work XDG_CONFIG_HOME=/wrkdirs/usr/ports/misc/llama-cpp/work XDG_CACHE_HOME=/wrkdirs/usr/ports/misc/llama-cpp/work/.cache HOME=/wrkdirs/usr/ports/misc/llama-cpp/work TMPDIR="/tmp" PATH=/wrkdirs/usr/ports/misc/llama-cpp/work/.bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/root/bin PKG_CONFIG_LIBDIR=/wrkdirs/usr/ports/misc/llama-cpp/work/.pkgconfig:/usr/local/libdata/pkgconfig:/usr/local/share/pkgconfig:/usr/libdata/pkgconfig MK_DEBUG_FILES=no MK_KERNEL_SYMBOLS=no SHELL=/bin/sh NO_LINT=YES DESTDIR=/wrkdirs/usr/ports/misc/llama-cpp/work/stage PREFIX=/usr/local LOCALBASE=/usr/local CC="cc" CFLAGS="-O2 -pipe -fstack-protector-strong -fno-strict-aliasing " CPP="cpp" CPPFLAGS="" LDFLAGS=" -pthread -fstack-protector-strong " LIBS="" CXX="c++" CXXFLAGS="-O2 -pipe -fstack-protector-strong -fno-strict-aliasing " BSD_INSTALL_PROGRAM="install -s -m 555" BSD_INSTALL_LIB="install -s -m 0644" BSD_INSTALL_SCRIPT="install -m 555" BSD_INSTALL_DATA="install -m 0644" BSD_INSTALL_MAN="install -m 444"
--End MAKE_ENV--

--PLIST_SUB--
PORTEXAMPLES="" EXAMPLES="" NO_EXAMPLES="@comment " CMAKE_BUILD_TYPE="release" PYTHON_INCLUDEDIR=include/python3.9 PYTHON_LIBDIR=lib/python3.9 PYTHON_PLATFORM=freebsd14 PYTHON_SITELIBDIR=lib/python3.9/site-packages PYTHON_SUFFIX=39 PYTHON_EXT_SUFFIX=.cpython-39 PYTHON_VER=3.9 PYTHON_VERSION=python3.9 PYTHON2="@comment " PYTHON3="" OSREL=14.0 PREFIX=%D LOCALBASE=/usr/local RESETPREFIX=/usr/local LIB32DIR=lib DOCSDIR="share/doc/llama-cpp" EXAMPLESDIR="share/examples/llama-cpp" DATADIR="share/llama-cpp" WWWDIR="www/llama-cpp" ETCDIR="etc/llama-cpp"
--End PLIST_SUB--

--SUB_LIST--
EXAMPLES="" NO_EXAMPLES="@comment " PYTHON_INCLUDEDIR=/usr/local/include/python3.9 PYTHON_LIBDIR=/usr/local/lib/python3.9 PYTHON_PLATFORM=freebsd14 PYTHON_SITELIBDIR=/usr/local/lib/python3.9/site-packages PYTHON_SUFFIX=39 PYTHON_EXT_SUFFIX=.cpython-39 PYTHON_VER=3.9 PYTHON_VERSION=python3.9 PYTHON2="@comment " PYTHON3="" PREFIX=/usr/local LOCALBASE=/usr/local DATADIR=/usr/local/share/llama-cpp DOCSDIR=/usr/local/share/doc/llama-cpp EXAMPLESDIR=/usr/local/share/examples/llama-cpp WWWDIR=/usr/local/www/llama-cpp ETCDIR=/usr/local/etc/llama-cpp
--End SUB_LIST--

---Begin make.conf---
USE_PACKAGE_DEPENDS=yes
BATCH=yes
WRKDIRPREFIX=/wrkdirs
PORTSDIR=/usr/ports
PACKAGES=/packages
DISTDIR=/distfiles
PACKAGE_BUILDING=yes
PACKAGE_BUILDING_FLAVORS=yes
####
####
# XXX: We really need this but cannot use it while 'make checksum' does not
# try the next mirror on checksum failure. It currently retries the same
# failed mirror and then fails rather than trying another. It *does*
# try the next if the size is mismatched though.
#MASTER_SITE_FREEBSD=yes
# Build ALLOW_MAKE_JOBS_PACKAGES with 3 jobs
MAKE_JOBS_NUMBER=3
#### Misc Poudriere ####
.include "/etc/make.conf.ports_env"
GID=0
UID=0
---End make.conf---

--Resource limits--
cpu time               (seconds, -t)  unlimited
file size           (512-blocks, -f)  unlimited
data seg size           (kbytes, -d)  524288
stack size              (kbytes, -s)  65536
core file size      (512-blocks, -c)  unlimited
max memory size         (kbytes, -m)  unlimited
locked memory           (kbytes, -l)  unlimited
max user processes              (-u)  89999
open files                      (-n)  8192
virtual mem size        (kbytes, -v)  unlimited
swap limit              (kbytes, -w)  unlimited
socket buffer size       (bytes, -b)  unlimited
pseudo-terminals                (-p)  unlimited
kqueues                         (-k)  unlimited
umtx shared locks               (-o)  unlimited
--End resource limits--

===================================================
===== env: NO_DEPENDS=yes USER=root UID=0 GID=0
===> License MIT accepted by the user
===========================================================================
===================================================
===== env: USE_PACKAGE_DEPENDS_ONLY=1 USER=root UID=0 GID=0
===> llama-cpp-2619 depends on file: /usr/local/sbin/pkg - not found
===> Installing existing package /packages/All/pkg-1.21.1.pkg
[140releng-armv7-default-job-06] Installing pkg-1.21.1...
[140releng-armv7-default-job-06] Extracting pkg-1.21.1: .......... done
===> llama-cpp-2619 depends on file: /usr/local/sbin/pkg - found
===> Returning to build of llama-cpp-2619
===========================================================================
===================================================
===== env: USE_PACKAGE_DEPENDS_ONLY=1 USER=root UID=0 GID=0
===========================================================================
===================================================
===== env: NO_DEPENDS=yes USER=root UID=0 GID=0
===> License MIT accepted by the user
===> Fetching all distfiles required by llama-cpp-2619 for building
===========================================================================
===================================================
===== env: NO_DEPENDS=yes USER=root UID=0 GID=0
===> License MIT accepted by the user
===> Fetching all distfiles required by llama-cpp-2619 for building
=> SHA256 Checksum OK for ggerganov-llama.cpp-b2619_GH0.tar.gz.
=> SHA256 Checksum OK for nomic-ai-kompute-4565194_GH0.tar.gz.
===========================================================================
===================================================
===== env: USE_PACKAGE_DEPENDS_ONLY=1 USER=root UID=0 GID=0
===========================================================================
===================================================
===== env: NO_DEPENDS=yes USER=root UID=0 GID=0
===> License MIT accepted by the user
===> Fetching all distfiles required by llama-cpp-2619 for building
===> Extracting for llama-cpp-2619
=> SHA256 Checksum OK for ggerganov-llama.cpp-b2619_GH0.tar.gz.
=> SHA256 Checksum OK for nomic-ai-kompute-4565194_GH0.tar.gz.
===========================================================================
===================================================
===== env: USE_PACKAGE_DEPENDS_ONLY=1 USER=root UID=0 GID=0
===========================================================================
===================================================
===== env: NO_DEPENDS=yes USER=root UID=0 GID=0
===> Patching for llama-cpp-2619
===========================================================================
===================================================
===== env: USE_PACKAGE_DEPENDS_ONLY=1 USER=root UID=0 GID=0
===> llama-cpp-2619 depends on file: /usr/local/bin/cmake - not found
===> Installing existing package /packages/All/cmake-core-3.28.3.pkg
[140releng-armv7-default-job-06] Installing cmake-core-3.28.3...
[140releng-armv7-default-job-06] `-- Installing expat-2.6.2...
[140releng-armv7-default-job-06] `-- Extracting expat-2.6.2: .......... done
[140releng-armv7-default-job-06] `-- Installing jsoncpp-1.9.5...
[140releng-armv7-default-job-06] `-- Extracting jsoncpp-1.9.5: .......... done
[140releng-armv7-default-job-06] `-- Installing libuv-1.48.0...
[140releng-armv7-default-job-06] `-- Extracting libuv-1.48.0: .......... done
[140releng-armv7-default-job-06] `-- Installing rhash-1.4.4_1...
[140releng-armv7-default-job-06] |   `-- Installing gettext-runtime-0.22.5...
[140releng-armv7-default-job-06] |   | `-- Installing indexinfo-0.3.1...
[140releng-armv7-default-job-06] |   | `-- Extracting indexinfo-0.3.1: .... done
[140releng-armv7-default-job-06] |   `-- Extracting gettext-runtime-0.22.5: .......... done
[140releng-armv7-default-job-06] `-- Extracting rhash-1.4.4_1: .......... done
[140releng-armv7-default-job-06] Extracting cmake-core-3.28.3: .......... done
===> llama-cpp-2619 depends on file: /usr/local/bin/cmake - found
===> Returning to build of llama-cpp-2619
===> llama-cpp-2619 depends on executable: ninja - not found
===> Installing existing package /packages/All/ninja-1.11.1,2.pkg
[140releng-armv7-default-job-06] Installing ninja-1.11.1,2...
[140releng-armv7-default-job-06] `-- Installing python39-3.9.18_2...
[140releng-armv7-default-job-06] Extracting ninja-1.11.1,2: ........ done
=====
Message from python39-3.9.18_2:

--
Note that some standard Python modules are provided as separate ports
as they require additional dependencies. They are available as:

py39-gdbm       databases/py-gdbm@py39
py39-sqlite3    databases/py-sqlite3@py39
py39-tkinter    x11-toolkits/py-tkinter@py39

===> llama-cpp-2619 depends on executable: ninja - found
===> Returning to build of llama-cpp-2619
===========================================================================
===================================================
===== env: USE_PACKAGE_DEPENDS_ONLY=1 USER=root UID=0 GID=0
===========================================================================
===================================================
===== env: NO_DEPENDS=yes USER=root UID=0 GID=0
===> Configuring for llama-cpp-2619
===> Performing out-of-source build
/bin/mkdir -p /wrkdirs/usr/ports/misc/llama-cpp/work/.build
-- The C compiler identification is Clang 16.0.6
-- The CXX compiler identification is Clang 16.0.6
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /wrkdirs/usr/ports/misc/llama-cpp/work/.bin/git
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: armv7
-- ARM detected
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E - Failed
CMake Warning at common/CMakeLists.txt:29 (message):
  Git repository not found; to enable automatic generation of build info,
  make sure Git is installed and the project is a Git repository.
-- Configuring done (5.4s)
-- Generating done (0.2s)
CMake Warning:
  Manually-specified variables were not used by the project:

    BOOST_PYTHON_SUFFIX
    CMAKE_COLOR_MAKEFILE
    CMAKE_MODULE_LINKER_FLAGS
    CMAKE_VERBOSE_MAKEFILE
    FETCHCONTENT_FULLY_DISCONNECTED
    Python3_EXECUTABLE
    Python_ADDITIONAL_VERSIONS
    Python_EXECUTABLE

-- Build files have been written to: /wrkdirs/usr/ports/misc/llama-cpp/work/.build
===========================================================================
===================================================
===== env: NO_DEPENDS=yes USER=root UID=0 GID=0
===> Building for llama-cpp-2619
[ 1% 3/96] /usr/bin/cc -DGGML_SCHED_MAX_COPIES=4 -D_XOPEN_SOURCE=600 -D__BSD_VISIBLE -I/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/. -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -pthread -MD -MT CMakeFiles/ggml.dir/ggml-alloc.c.o -MF CMakeFiles/ggml.dir/ggml-alloc.c.o.d -o CMakeFiles/ggml.dir/ggml-alloc.c.o -c /wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/ggml-alloc.c
[ 2% 4/96] /usr/bin/cc -DGGML_SCHED_MAX_COPIES=4 -D_XOPEN_SOURCE=600 -D__BSD_VISIBLE -I/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/. -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -pthread -MD -MT CMakeFiles/ggml.dir/ggml-backend.c.o -MF CMakeFiles/ggml.dir/ggml-backend.c.o.d -o CMakeFiles/ggml.dir/ggml-backend.c.o -c /wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/ggml-backend.c
[ 3% 5/96] /usr/bin/cc -DGGML_SCHED_MAX_COPIES=4 -D_XOPEN_SOURCE=600 -D__BSD_VISIBLE -I/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/. -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -pthread -MD -MT CMakeFiles/ggml.dir/ggml-quants.c.o -MF CMakeFiles/ggml.dir/ggml-quants.c.o.d -o CMakeFiles/ggml.dir/ggml-quants.c.o -c /wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/ggml-quants.c
[ 4% 6/96] /usr/bin/c++ -DGGML_SCHED_MAX_COPIES=4 -DLLAMA_BUILD -DLLAMA_SHARED -D_XOPEN_SOURCE=600 -D__BSD_VISIBLE -Dllama_EXPORTS -I/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/. -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -pthread -MD -MT CMakeFiles/llama.dir/unicode.cpp.o -MF CMakeFiles/llama.dir/unicode.cpp.o.d -o CMakeFiles/llama.dir/unicode.cpp.o -c /wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/unicode.cpp
[ 5% 7/96] /usr/bin/cc -DGGML_SCHED_MAX_COPIES=4 -D_XOPEN_SOURCE=600 -D__BSD_VISIBLE -I/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/. -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -pthread -MD -MT CMakeFiles/ggml.dir/ggml.c.o -MF CMakeFiles/ggml.dir/ggml.c.o.d -o CMakeFiles/ggml.dir/ggml.c.o -c /wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/ggml.c
FAILED: CMakeFiles/ggml.dir/ggml.c.o
/usr/bin/cc -DGGML_SCHED_MAX_COPIES=4 -D_XOPEN_SOURCE=600 -D__BSD_VISIBLE -I/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/. -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -pthread -MD -MT CMakeFiles/ggml.dir/ggml.c.o -MF CMakeFiles/ggml.dir/ggml.c.o.d -o CMakeFiles/ggml.dir/ggml.c.o -c /wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/ggml.c
/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/ggml.c:1571:5: warning: implicit conversion increases floating-point precision: 'float' to 'ggml_float' (aka 'double') [-Wdouble-promotion]
    GGML_F16_VEC_REDUCE(sumf, sum);
    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/ggml.c:991:41: note: expanded from macro 'GGML_F16_VEC_REDUCE'
#define GGML_F16_VEC_REDUCE             GGML_F32Cx4_REDUCE
                                        ^
/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/ggml.c:981:38: note: expanded from macro 'GGML_F32Cx4_REDUCE'
#define GGML_F32Cx4_REDUCE           GGML_F32x4_REDUCE
                                     ^
/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/ggml.c:911:11: note: expanded from macro 'GGML_F32x4_REDUCE'
    res = GGML_F32x4_REDUCE_ONE(x[0]); \
        ~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~
/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/ggml.c:896:34: note: expanded from macro 'GGML_F32x4_REDUCE_ONE'
#define GGML_F32x4_REDUCE_ONE(x) vaddvq_f32(x)
                                 ^~~~~~~~~~~~~
/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/ggml.c:1619:9: warning: implicit conversion increases floating-point precision: 'float' to 'ggml_float' (aka 'double') [-Wdouble-promotion]
        GGML_F16_VEC_REDUCE(sumf[k], sum[k]);
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/ggml.c:991:41: note: expanded from macro 'GGML_F16_VEC_REDUCE'
#define GGML_F16_VEC_REDUCE             GGML_F32Cx4_REDUCE
                                        ^
/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/ggml.c:981:38: note: expanded from macro 'GGML_F32Cx4_REDUCE'
#define GGML_F32Cx4_REDUCE           GGML_F32x4_REDUCE
                                     ^
/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/ggml.c:911:11: note: expanded from macro 'GGML_F32x4_REDUCE'
    res = GGML_F32x4_REDUCE_ONE(x[0]); \
        ~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~
/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/ggml.c:896:34: note: expanded from macro 'GGML_F32x4_REDUCE_ONE'
#define GGML_F32x4_REDUCE_ONE(x) vaddvq_f32(x)
                                 ^~~~~~~~~~~~~
fatal error: error in backend: Cannot select: 0x27ff2a00: v4f32 = fmaxnum 0x27220550, 0x26481eb0
  0x27220550: v4f32,i32,ch = ARMISD::VLD1_UPD<(load (s128) from %ir.522, align 4)> 0x2649f67c, 0x264819b0, Constant:i32<16>, Constant:i32<1>
    0x264819b0: i32,ch = CopyFromReg 0x2649f67c, Register:i32 %191
      0x27ff20a0: i32 = Register %191
    0x27ff2870: i32 = Constant<16>
    0x26480500: i32 = Constant<1>
  0x26481eb0: v4f32 = bitcast 0x27fe0320
    0x27fe0320: v4i32 = ARMISD::VMOVIMM TargetConstant:i32<0>
      0x27eb4140: i32 = TargetConstant<0>
In function: ggml_compute_forward_unary
PLEASE submit a bug report to https://bugs.freebsd.org/submit/ and include the crash backtrace, preprocessed source, and associated run script.
Stack dump:
0.	Program arguments: /usr/bin/cc -DGGML_SCHED_MAX_COPIES=4 -D_XOPEN_SOURCE=600 -D__BSD_VISIBLE -I/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/. -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -pthread -MD -MT CMakeFiles/ggml.dir/ggml.c.o -MF CMakeFiles/ggml.dir/ggml.c.o.d -o CMakeFiles/ggml.dir/ggml.c.o -c /wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/ggml.c
1.	<eof> parser at end of file
2.	Code generation
3.	Running pass 'Function Pass Manager' on module '/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/ggml.c'.
4.	Running pass 'ARM Instruction Selection' on function '@ggml_compute_forward_unary'
#0 0x042b3f6c (/usr/bin/cc+0x42b3f6c)
#1 0x042b1eb4 (/usr/bin/cc+0x42b1eb4)
#2 0x0425f764 (/usr/bin/cc+0x425f764)
#3 0x0425f720 (/usr/bin/cc+0x425f720)
#4 0x042a25e4 (/usr/bin/cc+0x42a25e4)
#5 0x0173ec28 (/usr/bin/cc+0x173ec28)
#6 0x04265e20 (/usr/bin/cc+0x4265e20)
#7 0x045c5b4c (/usr/bin/cc+0x45c5b4c)
#8 0x045c50e0 (/usr/bin/cc+0x45c50e0)
#9 0x04a54fc0 (/usr/bin/cc+0x4a54fc0)
#10 0x045bd490 (/usr/bin/cc+0x45bd490)
#11 0x045bccd0 (/usr/bin/cc+0x45bccd0)
#12 0x045bc924 (/usr/bin/cc+0x45bc924)
#13 0x045bc338 (/usr/bin/cc+0x45bc338)
#14 0x045ba5a4 (/usr/bin/cc+0x45ba5a4)
#15 0x04a503f4 (/usr/bin/cc+0x4a503f4)
#16 0x03ad62a0 (/usr/bin/cc+0x3ad62a0)
#17 0x03f04898 (/usr/bin/cc+0x3f04898)
#18 0x03f0a468 (/usr/bin/cc+0x3f0a468)
#19 0x03f04e84 (/usr/bin/cc+0x3f04e84)
#20 0x01f9da9c (/usr/bin/cc+0x1f9da9c)
#21 0x022850bc (/usr/bin/cc+0x22850bc)
#22 0x02797710 (/usr/bin/cc+0x2797710)
#23 0x021ca634 (/usr/bin/cc+0x21ca634)
#24 0x0215e480 (/usr/bin/cc+0x215e480)
#25 0x0227ed24 (/usr/bin/cc+0x227ed24)
#26 0x0173e488 (/usr/bin/cc+0x173e488)
#27 0x0174ba70 (/usr/bin/cc+0x174ba70)
#28 0x0201a678 (/usr/bin/cc+0x201a678)
#29 0x0425f6fc (/usr/bin/cc+0x425f6fc)
#30 0x0201a140 (/usr/bin/cc+0x201a140)
#31 0x01fe78e8 (/usr/bin/cc+0x1fe78e8)
#32 0x01fe7b70 (/usr/bin/cc+0x1fe7b70)
#33 0x01ffe958 (/usr/bin/cc+0x1ffe958)
#34 0x0174b1a0 (/usr/bin/cc+0x174b1a0)
#35 0x25ce86c0 __libc_start1 (/lib/libc.so.7+0x576c0)
cc: error: clang frontend command failed with exit code 70 (use -v to see invocation)
FreeBSD clang version 16.0.6 (https://github.com/llvm/llvm-project.git llvmorg-16.0.6-0-g7cbf1a259152)
Target: armv7-unknown-freebsd14.0-gnueabihf
Thread model: posix
InstalledDir: /usr/bin
cc: note: diagnostic msg:
********************

PLEASE ATTACH THE FOLLOWING FILES TO THE BUG REPORT:
Preprocessed source(s) and associated run script(s) are located at:
cc: note: diagnostic msg: /tmp/ggml-3645d4.c
cc: note: diagnostic msg: /tmp/ggml-3645d4.sh
cc: note: diagnostic msg:

********************
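[Editor's note: the failure above is a crash inside clang 16's armv7 instruction
selector, not an error in the port's own code. The selection-DAG dump shows a
v4f32 fmaxnum node whose operands are a 16-byte vector load and a VMOVIMM zero
vector, which is the shape the auto-vectorizer tends to produce for a
max-against-zero (ReLU-style) loop under -funsafe-math-optimizations; the
crashing function, ggml_compute_forward_unary, contains exactly such loops. As
a minimal sketch only (repro.c and relu_inplace are hypothetical names; the
authoritative reproducer is the preprocessed source /tmp/ggml-3645d4.c kept on
the builder), a loop of the following shape, compiled with the same flags,
could plausibly exercise the same code path:

/* repro.c -- hypothetical reduction of the crash site; assumes the
 * fmaxnum(load, zero-vector) DAG above comes from a vectorized
 * max(x, 0.0f) loop like the ReLU case in ggml_compute_forward_unary.
 *
 * Assumed invocation, mirroring the failing flags:
 *   cc -O2 -mfpu=neon-fp-armv8 -mno-unaligned-access \
 *      -funsafe-math-optimizations -c repro.c
 */
#include <math.h>

void relu_inplace(float *x, int n)
{
    for (int i = 0; i < n; i++)
        x[i] = fmaxf(x[i], 0.0f);  /* may vectorize to a v4f32 fmaxnum node */
}

Whether this exact snippet trips the bug on 14.0's clang 16 would need to be
verified against the saved preprocessed source.]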
[ 6% 7/96] /usr/bin/c++ -DGGML_SCHED_MAX_COPIES=4 -DLLAMA_BUILD -DLLAMA_SHARED -D_XOPEN_SOURCE=600 -D__BSD_VISIBLE -Dllama_EXPORTS -I/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/. -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -pthread -MD -MT CMakeFiles/llama.dir/unicode-data.cpp.o -MF CMakeFiles/llama.dir/unicode-data.cpp.o.d -o CMakeFiles/llama.dir/unicode-data.cpp.o -c /wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/unicode-data.cpp
[ 7% 7/96] /usr/bin/c++ -DGGML_SCHED_MAX_COPIES=4 -DLLAMA_BUILD -DLLAMA_SHARED -D_XOPEN_SOURCE=600 -D__BSD_VISIBLE -Dllama_EXPORTS -I/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/. -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -mfpu=neon-fp-armv8 -mno-unaligned-access -funsafe-math-optimizations -pthread -MD -MT CMakeFiles/llama.dir/llama.cpp.o -MF CMakeFiles/llama.dir/llama.cpp.o.d -o CMakeFiles/llama.dir/llama.cpp.o -c /wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/llama.cpp
/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/llama.cpp:15521:70: warning: format specifies type 'unsigned long' but the argument has type 'size_type' (aka 'unsigned int') [-Wformat]
            throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
                                                              ~~~    ^~~~~~~~~~~~~~~~~~~~~~
                                                              %zu
/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/llama.cpp:15530:97: warning: format specifies type 'unsigned long' but the argument has type 'size_t' (aka 'unsigned int') [-Wformat]
            throw std::runtime_error(format("corrupt output buffer (j=%d, output_size=%lu)", j, ctx->output_size));
                                                                                       ~~~      ^~~~~~~~~~~~~~~~
                                                                                       %zu
/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/llama.cpp:15557:70: warning: format specifies type 'unsigned long' but the argument has type 'size_type' (aka 'unsigned int') [-Wformat]
            throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
                                                              ~~~    ^~~~~~~~~~~~~~~~~~~~~~
                                                              %zu
/wrkdirs/usr/ports/misc/llama-cpp/work/llama.cpp-b2619/llama.cpp:15566:97: warning: format specifies type 'unsigned long' but the argument has type 'size_t' (aka 'unsigned int') [-Wformat]
            throw std::runtime_error(format("corrupt output buffer (j=%d, output_size=%lu)", j, ctx->output_size));
                                                                                       ~~~      ^~~~~~~~~~~~~~~~
                                                                                       %zu
4 warnings generated.
ninja: build stopped: subcommand failed.
===> Compilation failed unexpectedly.
Try to set MAKE_JOBS_UNSAFE=yes and rebuild before reporting the failure to the maintainer.
*** Error code 1

Stop.
make: stopped in /usr/ports/misc/llama-cpp
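[Editor's note: two distinct issues appear in this last stretch of the log.
The four -Wformat warnings in llama.cpp are benign on their own: on 32-bit
armv7, size_t and std::vector's size_type are 'unsigned int', so a "%lu"
conversion specifier mismatches the argument type even though the widths
happen to agree, and clang's suggested "%zu" is the portable spelling. A
minimal sketch of the pattern and its fix (print_range is a hypothetical
name, in plain C rather than the C++ call sites in llama.cpp):

#include <stdio.h>
#include <stddef.h>

/* On armv7, size_t is 'unsigned int', not 'unsigned long', so "%lu"
 * trips -Wformat; "%zu" matches size_t on every target. */
void print_range(size_t n)
{
    printf("out of range [0, %zu)\n", n);  /* not "%lu" */
}

The actual failure, though, is the compiler crash in ggml.c: a deterministic
internal error in a single cc invocation, so the log's generic advice to retry
with 'make MAKE_JOBS_UNSAFE=yes' (which merely disables parallel jobs) seems
unlikely to change the outcome. More promising avenues would be dropping
-funsafe-math-optimizations for this architecture or filing the reproducer
from /tmp/ggml-3645d4.sh with the FreeBSD toolchain maintainers, as the crash
message requests.]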