From: Aryeh Friedman <aryeh.friedman@gmail.com>
Date: Thu, 20 Apr 2023 20:38:44 -0400
Subject: Re: Installing openAI's GPT-2 Ada AI Language Model
To: Mario Marietto
Cc: FreeBSD Mailing List, FreeBSD Mailing List, Yuri Victorovich
List-Id: Technical discussions relating to FreeBSD
List-Archive: https://lists.freebsd.org/archives/freebsd-hackers

On Thu, Apr 20, 2023 at 12:24 PM Mario Marietto wrote:
>
> try to copy and paste the commands that you have issued on pastebin... I need to understand the scenario

After saving the patch from the bug report to PORT/files and running
portmaster -P misc/pytorch (brand new machine except for installing
portmaster):
c/ATen/UfuncCPUKernel_add.cpp.AVX2.cpp.o -c /usr/ports/misc/pytorch/work/.build/aten/src/ATen/UfuncCPUKernel_add.cpp.AVX2.cpp
In file included from /usr/ports/misc/pytorch/work/.build/aten/src/ATen/UfuncCPUKernel_add.cpp.AVX2.cpp:1:
In file included from /usr/ports/misc/pytorch/work/.build/aten/src/ATen/UfuncCPUKernel_add.cpp:3:
In file included from /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/native/ufunc/add.h:6:
In file included from /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/functional.h:3:
In file included from /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/functional_base.h:6:
In file included from /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec.h:6:
In file included from /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256.h:12:
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:253:16: error: cannot initialize a parameter of type 'const __m256 (*)(__m256)' with an lvalue of type '__m256 (__m256)': different return type ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
    return map(Sleef_acosf8_u10);
           ^~~~~~~~~~~~~~~~
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49: note: passing argument to parameter 'vop' here
  Vectorized map(const __m256 (*const vop)(__m256)) const {
                                                ^
The identical error and note are repeated for each of the remaining Sleef calls in vec256_bfloat16.h:
  :256:16  return map(Sleef_asinf8_u10);
  :259:16  return map(Sleef_atanf8_u10);
  :280:16  return map(Sleef_erff8_u10);
  :283:16  return map(Sleef_erfcf8_u15);
  :300:16  return map(Sleef_expf8_u10);
  :303:16  return map(Sleef_expm1f8_u10);
  :393:16  return map(Sleef_logf8_u10);
  :396:16  return map(Sleef_log2f8_u10);
  :399:16  return map(Sleef_log10f8_u10);
  :402:16  return map(Sleef_log1pf8_u10);
  :406:16  return map(Sleef_sinf8_u10);
  :409:16  return map(Sleef_sinhf8_u10);
  :412:16  return map(Sleef_cosf8_u10);
  :415:16  return map(Sleef_coshf8_u10);
  :447:16  return map(Sleef_tanf8_u10);
  :450:16  return map(Sleef_tanhf8_u10);
  :460:16  return map(Sleef_lgammaf8_u10);
18 errors generated.
[ 80% 1035/1283] /usr/bin/c++ -DAT_PER_OPERATOR_HEADERS -DCPUINFO_SUPPORTED_PLATFORM=0 -DFMT_HEADER_ONLY=1 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -I/usr/ports/misc/pytorch/work/.build/aten/src -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src -I/usr/ports/misc/pytorch/work/.build -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1 -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/foxi -I/usr/ports/misc/pytorch/work/.build/third_party/foxi -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api/include -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/caffe2/aten/src/TH -I/usr/ports/misc/pytorch/work/.build/caffe2/aten/src/TH -I/usr/ports/misc/pytorch/work/.build/caffe2/aten/src -I/usr/ports/misc/pytorch/work/.build/caffe2/../aten/src -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/miniz-2.1.0 -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/kineto/libkineto/include -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/kineto/libkineto/src -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/../third_party/catch/single_include -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/.. -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/c10/.. -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/cpuinfo/include -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/FP16/include -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/fmt/include -I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/flatbuffers/include -isystem /usr/ports/misc/pytorch/work/pytorch-v1.13.1/cmake/../third_party/eigen -isystem /usr/ports/misc/pytorch/work/pytorch-v1.13.1/caffe2 -O2 -pipe -fstack-protector-strong -isystem /usr/local/include -fno-strict-aliasing -isystem /usr/local/include -Wno-deprecated -fvisibility-inlines-hidden -fopenmp=libomp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wvla-extension -Wno-range-loop-analysis -Wno-pass-failed -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wconstant-conversion -Wno-invalid-partial-specialization -Wno-typedef-redefinition -Wno-unused-private-field -Wno-inconsistent-missing-override -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces -Wunused-lambda-capture -Wunused-local-typedef -Qunused-arguments -fcolor-diagnostics -fdiagnostics-color=always -Wno-unused-but-set-variable -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O2 -pipe -fstack-protector-strong -isystem /usr/local/include -fno-strict-aliasing -isystem /usr/local/include -DNDEBUG -DNDEBUG -std=gnu++14 -fPIC -DTH_HAVE_THREAD -Wall -Wextra -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-type-limits -Wno-array-bounds -Wno-sign-compare -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-missing-braces -Wno-range-loop-analysis -fvisibility=hidden -O2 -fopenmp=libomp -DCAFFE2_BUILD_MAIN_LIB -pthread -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/input-archive.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/input-archive.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/input-archive.cpp.o -c /usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api/src/serialize/input-archive.cpp
[ 80% 1035/1283] /usr/bin/c++ (identical options to the previous command) -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/output-archive.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/output-archive.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/output-archive.cpp.o -c /usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api/src/serialize/output-archive.cpp
ninja: build stopped: subcommand failed.
===> Compilation failed unexpectedly.
Try to set MAKE_JOBS_UNSAFE=yes and rebuild before reporting the failure to the maintainer.
*** Error code 1

Stop.
make: stopped in /usr/ports/misc/pytorch

>
> On Thu, Apr 20, 2023 at 17:51, Aryeh Friedman wrote:
>>
>> On Thu, Apr 20, 2023 at 7:52 AM Thierry Thomas wrote:
>> >
>> > On Thu, 20 Apr 2023 at 12:53:05 +0200, Aryeh Friedman wrote:
>> >
>> > > Running without a GPU (for now) on a bhyve VM (3 CPUs, 2 GB RAM and 100
>> > > GB of disk), which I intend to use for determining whether it is worth going
>> > > out and getting the hardware to do GPU. The problem I had was getting
>> > > pytorch to work, since it appears I have to build it from source, and it
>> > > blows up in that build.
>> >
>> > Have you seen
>> > ?
>>
>> This seems to be true for all OSes. I guess I will have to find an
>> Intel machine... this is as bad as the motivation that led me to do
>> PetiteCloud in the first place (OpenStack not running on AMD, period).
>> Is there just no way to run an ANN in PyTorch's data format in any
>> way that is not Python (like Java?!!?)? Note that the tensorflow port
>> required pytorch.
>>
>>
>> --
>> Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org

-- 
Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org