From nobody Thu Oct 21 22:31:36 2021
Date: Thu, 21 Oct 2021 22:31:36 GMT
Message-Id: <202110212231.19LMVaFo018836@gitrepo.freebsd.org>
To: src-committers@FreeBSD.org, dev-commits-src-all@FreeBSD.org,
    dev-commits-src-branches@FreeBSD.org
From: Alexander Motin
Subject: git: b5919ea4e6bb - stable/13 - x86: Add NUMA nodes into CPU topology.
List-Id: Commits to the stable branches of the FreeBSD src repository
List-Archive: https://lists.freebsd.org/archives/dev-commits-src-branches
X-Git-Committer: mav
X-Git-Repository: src
X-Git-Refname: refs/heads/stable/13
X-Git-Reftype: branch
X-Git-Commit: b5919ea4e6bb21e22484bc1943665e1f7e3bf888

The branch stable/13 has been updated by mav:

URL: https://cgit.FreeBSD.org/src/commit/?id=b5919ea4e6bb21e22484bc1943665e1f7e3bf888

commit b5919ea4e6bb21e22484bc1943665e1f7e3bf888
Author:     Alexander Motin
AuthorDate: 2021-09-23 17:41:02 +0000
Commit:     Alexander Motin
CommitDate: 2021-10-21 22:24:36 +0000

    x86: Add NUMA nodes into CPU topology.

    Depending on the hardware, NUMA nodes may match the last-level caches,
    sit above them (AMD Zen 2/3), or sit below them (Intel Xeon with
    Sub-NUMA Clustering).  This information comes from ACPI rather than
    CPUID, and it is provided per CPU rather than as APIC ID mask widths,
    but this code should handle all of the above cases properly.

    This change should immediately allow idle stealing in sched_ule(4) to
    prefer load from NUMA-local CPUs over remote ones when the NUMA node
    does not match the LLC.  Later we may consider how to better handle
    this on the sched_pickcpu() side.

    MFC after:      1 month

    (cherry picked from commit ef50d5fbc39fc39970eab1234222b5ac1d9ba74c)
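For readers who want the placement rule in isolation, below is a stand-alone
sketch of the test topo_probe() performs in the patch that follows.  The toy
domain table and the names NCPUS, cpu_domain[] and node_fits_at_shift() are
hypothetical stand-ins for max_apic_id, cpu_info[] and
acpi_pxm_get_cpu_locality(); the kernel loop additionally skips APIC IDs with
no present CPU.  The rule: a NUMA layer fits at a given depth only if every
aligned block of (1 << id_shift) APIC IDs maps to a single ACPI locality
domain.

#include <stdbool.h>
#include <stdio.h>

#define	NCPUS	8

/* ACPI-reported locality domain per APIC ID: two 4-CPU NUMA nodes. */
static const int cpu_domain[NCPUS] = { 0, 0, 0, 0, 1, 1, 1, 1 };

static bool
node_fits_at_shift(int id_shift)
{
	int domain = -1;

	for (int i = 0; i < NCPUS; i++) {
		/* A new aligned block starts: forget the old domain. */
		if ((i & ((1 << id_shift) - 1)) == 0)
			domain = -1;
		/* Block spans two domains: too wide to sit inside one node. */
		if (domain >= 0 && domain != cpu_domain[i])
			return (false);
		domain = cpu_domain[i];
	}
	return (true);
}

int
main(void)
{
	/* Blocks of 8 span both domains; blocks of 4, 2 and 1 do not. */
	for (int shift = 3; shift >= 0; shift--)
		printf("id_shift %d: %s\n", shift,
		    node_fits_at_shift(shift) ? "fits" : "too wide");
	return (0);
}

topo_probe() walks the layers from widest (package) to narrowest (PU) and
splices a TOPO_TYPE_NODE layer in just above the first layer that passes this
test, which is how all three hardware cases described above fall out of a
single loop.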
---
 sys/kern/sched_ule.c |  2 ++
 sys/sys/smp.h        |  1 +
 sys/x86/x86/mp_x86.c | 82 +++++++++++++++++++++++++++++++++++++++++++---------
 3 files changed, 72 insertions(+), 13 deletions(-)

diff --git a/sys/kern/sched_ule.c b/sys/kern/sched_ule.c
index d92436f70db2..98c1f0bca981 100644
--- a/sys/kern/sched_ule.c
+++ b/sys/kern/sched_ule.c
@@ -3098,6 +3098,8 @@ sysctl_kern_sched_topology_spec_internal(struct sbuf *sb, struct cpu_group *cg,
 			sbuf_printf(sb, "<flag name=\"THREAD\">THREAD group</flag>");
 		if ((cg->cg_flags & CG_FLAG_SMT) != 0)
 			sbuf_printf(sb, "<flag name=\"SMT\">SMT group</flag>");
+		if ((cg->cg_flags & CG_FLAG_NODE) != 0)
+			sbuf_printf(sb, "<flag name=\"NODE\">NUMA node</flag>");
 		sbuf_printf(sb, "</flags>\n");
 	}

diff --git a/sys/sys/smp.h b/sys/sys/smp.h
index cee1199015a7..1da557212ae2 100644
--- a/sys/sys/smp.h
+++ b/sys/sys/smp.h
@@ -107,6 +107,7 @@ typedef struct cpu_group *cpu_group_t;
 #define	CG_FLAG_HTT	0x01		/* Schedule the alternate core last. */
 #define	CG_FLAG_SMT	0x02		/* New age htt, less crippled. */
 #define	CG_FLAG_THREAD	(CG_FLAG_HTT | CG_FLAG_SMT) /* Any threading. */
+#define	CG_FLAG_NODE	0x04		/* NUMA node. */
 
 /*
  * Convenience routines for building and traversing topologies.

diff --git a/sys/x86/x86/mp_x86.c b/sys/x86/x86/mp_x86.c
index db40aab28ad5..326b6fdae77d 100644
--- a/sys/x86/x86/mp_x86.c
+++ b/sys/x86/x86/mp_x86.c
@@ -27,6 +27,7 @@
 #include <sys/cdefs.h>
 __FBSDID("$FreeBSD$");
 
+#include "opt_acpi.h"
 #ifdef __i386__
 #include "opt_apic.h"
 #endif
@@ -82,6 +83,11 @@ __FBSDID("$FreeBSD$");
 #include <machine/specialreg.h>
 #include <x86/ucode.h>
 
+#ifdef DEV_ACPI
+#include <contrib/dev/acpica/include/acpi.h>
+#include <dev/acpica/acpivar.h>
+#endif
+
 static MALLOC_DEFINE(M_CPUS, "cpus", "CPU items");
 
 /* lock region used by kernel profiling */
@@ -501,13 +507,16 @@ topo_probe(void)
 		int type;
 		int subtype;
 		int id_shift;
-	} topo_layers[MAX_CACHE_LEVELS + 4];
+	} topo_layers[MAX_CACHE_LEVELS + 5];
 	struct topo_node *parent;
 	struct topo_node *node;
 	int layer;
 	int nlayers;
 	int node_id;
 	int i;
+#if defined(DEV_ACPI) && MAXMEMDOM > 1
+	int d, domain;
+#endif
 
 	if (cpu_topo_probed)
 		return;
@@ -582,6 +591,31 @@ topo_probe(void)
 	topo_layers[nlayers].id_shift = 0;
 	nlayers++;
 
+#if defined(DEV_ACPI) && MAXMEMDOM > 1
+	if (vm_ndomains > 1) {
+		for (layer = 0; layer < nlayers; ++layer) {
+			for (i = 0; i <= max_apic_id; ++i) {
+				if ((i & ((1 << topo_layers[layer].id_shift) - 1)) == 0)
+					domain = -1;
+				if (!cpu_info[i].cpu_present)
+					continue;
+				d = acpi_pxm_get_cpu_locality(i);
+				if (domain >= 0 && domain != d)
+					break;
+				domain = d;
+			}
+			if (i > max_apic_id)
+				break;
+		}
+		KASSERT(layer < nlayers, ("NUMA domain smaller than PU"));
+		memmove(&topo_layers[layer+1], &topo_layers[layer],
+		    sizeof(*topo_layers) * (nlayers - layer));
+		topo_layers[layer].type = TOPO_TYPE_NODE;
+		topo_layers[layer].subtype = CG_SHARE_NONE;
+		nlayers++;
+	}
+#endif
+
 	topo_init_root(&topo_root);
 	for (i = 0; i <= max_apic_id; ++i) {
 		if (!cpu_info[i].cpu_present)
@@ -589,7 +623,12 @@ topo_probe(void)
 		parent = &topo_root;
 		for (layer = 0; layer < nlayers; ++layer) {
-			node_id = i >> topo_layers[layer].id_shift;
+#if defined(DEV_ACPI) && MAXMEMDOM > 1
+			if (topo_layers[layer].type == TOPO_TYPE_NODE) {
+				node_id = acpi_pxm_get_cpu_locality(i);
+			} else
+#endif
+			node_id = i >> topo_layers[layer].id_shift;
 			parent = topo_add_node_by_hwid(parent, node_id,
 			    topo_layers[layer].type, topo_layers[layer].subtype);
@@ -598,7 +637,12 @@ topo_probe(void)
 	parent = &topo_root;
 	for (layer = 0; layer < nlayers; ++layer) {
-		node_id = boot_cpu_id >> topo_layers[layer].id_shift;
+#if defined(DEV_ACPI) && MAXMEMDOM > 1
+		if (topo_layers[layer].type == TOPO_TYPE_NODE)
+			node_id = acpi_pxm_get_cpu_locality(boot_cpu_id);
+		else
+#endif
+		node_id = boot_cpu_id >> topo_layers[layer].id_shift;
 		node = topo_find_node_by_hwid(parent, node_id,
 		    topo_layers[layer].type, topo_layers[layer].subtype);
@@ -773,14 +817,18 @@ x86topo_add_sched_group(struct topo_node *root, struct cpu_group *cg_root)
 	int i;
 
 	KASSERT(root->type == TOPO_TYPE_SYSTEM || root->type == TOPO_TYPE_CACHE ||
-	    root->type == TOPO_TYPE_GROUP,
+	    root->type == TOPO_TYPE_NODE || root->type == TOPO_TYPE_GROUP,
 	    ("x86topo_add_sched_group: bad type: %u", root->type));
 	CPU_COPY(&root->cpuset, &cg_root->cg_mask);
 	cg_root->cg_count = root->cpu_count;
-	if (root->type == TOPO_TYPE_SYSTEM)
+	if (root->type == TOPO_TYPE_CACHE)
+		cg_root->cg_level = root->subtype;
+	else
 		cg_root->cg_level = CG_SHARE_NONE;
+	if (root->type == TOPO_TYPE_NODE)
+		cg_root->cg_flags = CG_FLAG_NODE;
 	else
-		cg_root->cg_level = root->subtype;
+		cg_root->cg_flags = 0;
 
 	/*
 	 * Check how many core nodes we have under the given root node.
 	 */
@@ -801,7 +849,7 @@ x86topo_add_sched_group(struct topo_node *root, struct cpu_group *cg_root)
 	if (cg_root->cg_level != CG_SHARE_NONE &&
 	    root->cpu_count > 1 && ncores < 2)
-		cg_root->cg_flags = CG_FLAG_SMT;
+		cg_root->cg_flags |= CG_FLAG_SMT;
 
 	/*
 	 * Find out how many cache nodes we have under the given root node.
 	 */
@@ -813,10 +861,18 @@ x86topo_add_sched_group(struct topo_node *root, struct cpu_group *cg_root)
 	nchildren = 0;
 	node = root;
 	while (node != NULL) {
-		if ((node->type != TOPO_TYPE_GROUP &&
-		    node->type != TOPO_TYPE_CACHE) ||
-		    (root->type != TOPO_TYPE_SYSTEM &&
-		    CPU_CMP(&node->cpuset, &root->cpuset) == 0)) {
+		if (CPU_CMP(&node->cpuset, &root->cpuset) == 0) {
+			if (node->type == TOPO_TYPE_CACHE &&
+			    cg_root->cg_level < node->subtype)
+				cg_root->cg_level = node->subtype;
+			if (node->type == TOPO_TYPE_NODE)
+				cg_root->cg_flags |= CG_FLAG_NODE;
+			node = topo_next_node(root, node);
+			continue;
+		}
+		if (node->type != TOPO_TYPE_GROUP &&
+		    node->type != TOPO_TYPE_NODE &&
+		    node->type != TOPO_TYPE_CACHE) {
 			node = topo_next_node(root, node);
 			continue;
 		}
@@ -841,9 +897,9 @@ x86topo_add_sched_group(struct topo_node *root, struct cpu_group *cg_root)
 	i = 0;
 	while (node != NULL) {
 		if ((node->type != TOPO_TYPE_GROUP &&
+		    node->type != TOPO_TYPE_NODE &&
 		    node->type != TOPO_TYPE_CACHE) ||
-		    (root->type != TOPO_TYPE_SYSTEM &&
-		    CPU_CMP(&node->cpuset, &root->cpuset) == 0)) {
+		    CPU_CMP(&node->cpuset, &root->cpuset) == 0) {
 			node = topo_next_node(root, node);
 			continue;
 		}
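As a usage illustration (not part of this commit): once topo_probe() has
built the tree and x86topo_add_sched_group() has converted it, a CPU's NUMA
node can be found by walking the scheduler topology from cpu_top and stopping
at the first group flagged CG_FLAG_NODE.  numa_group_of() is a hypothetical
helper name; struct cpu_group, cpu_top and CPU_ISSET() are the existing
kernel interfaces from sys/sys/smp.h and sys/sys/cpuset.h.

#include <sys/param.h>
#include <sys/cpuset.h>
#include <sys/smp.h>

static struct cpu_group *
numa_group_of(int cpu)
{
	struct cpu_group *cg, *child;
	int i;

	cg = cpu_top;			/* Root of the scheduler topology. */
	while (cg != NULL) {
		if ((cg->cg_flags & CG_FLAG_NODE) != 0)
			return (cg);	/* First NODE group on the path. */
		/* Descend into the child whose CPU mask contains "cpu". */
		child = NULL;
		for (i = 0; i < cg->cg_children; i++) {
			if (CPU_ISSET(cpu, &cg->cg_child[i].cg_mask)) {
				child = &cg->cg_child[i];
				break;
			}
		}
		cg = child;
	}
	return (NULL);			/* Topology has no NUMA level. */
}

Note that when the node coincides with the LLC, a single cpu_group carries
both the cache cg_level and CG_FLAG_NODE, which is why the patch ORs flags
in (cg_flags |= CG_FLAG_NODE, cg_flags |= CG_FLAG_SMT) rather than
overwriting them.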