RFT: numa policy branch

Rui Paulo rpaulo at me.com
Tue Apr 28 22:39:33 UTC 2015


On Apr 26, 2015, at 01:30 PM, Adrian Chadd <adrian at freebsd.org> wrote:

Hi!

Another update:

* updated to recent -HEAD;
* numactl can now set memory policy and cpuset domain information, so
it's easy to say "this runs in memory domain X and cpu domain Y" in
one pass with it;
 
That works, but --mempolicy=first-touch should ignore the --memdomain argument (or print an error) if it's present.
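For concreteness, a one-pass invocation might look like this. This is a sketch based on the flag names used in this thread (--mempolicy, --memdomain) plus a hypothetical --cpudomain flag for the cpuset side; the exact option spellings in the final numactl(1) may differ:

```
# Run the command with memory allocated from domain 1 and its threads
# restricted to the CPUs of domain 0, in a single pass:
numactl --mempolicy=fixed-domain --memdomain=1 --cpudomain=0 ./myapp

# first-touch allocates from whichever domain the thread first runs in,
# so --memdomain is not meaningful here and should be rejected or ignored:
numactl --mempolicy=first-touch --cpudomain=0 ./myapp
```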

* the locality matrix is now available. Here's an example from Scott's
2x Haswell v3 box, with cluster-on-die enabled:

vm.phys_locality:
0: 10 21 31 31
1: 21 10 31 31
2: 31 31 10 21
3: 31 31 21 10

And on the westmere-ex box, with no SLIT table:

vm.phys_locality:
0: -1 -1 -1 -1
1: -1 -1 -1 -1
2: -1 -1 -1 -1
3: -1 -1 -1 -1
 
This worked for us on IvyBridge with a SLIT table.

* I've tested it on westmere-ex (4x socket), sandybridge, ivybridge,
haswell v3 and haswell v3 cluster-on-die.
* I've discovered that our implementation of libgomp (from gcc-4.2) is
very old and doesn't include some of the thread control environment
variables, grr.
* .. and that the gcc libgomp code doesn't have FreeBSD thread
affinity routines at all, so I added them to gcc-4.8.
 
I used gcc 4.9
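For anyone testing the OpenMP side: with a libgomp that has the affinity glue wired up, the standard libgomp environment variables become usable. A sketch (the binary name is hypothetical; the variables themselves are the documented GNU libgomp / OpenMP ones):

```
# Pin 8 OpenMP threads to the CPUs of one cluster-on-die domain:
env OMP_NUM_THREADS=8 GOMP_CPU_AFFINITY="0-7" ./omp_bench

# Newer libgomp also understands the standard OpenMP 4.0 binding knobs
# (gcc 4.9 or later):
env OMP_PROC_BIND=close OMP_PLACES=cores ./omp_bench
```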

I'd appreciate any reviews / testing people are able to provide. I'm
about at the functionality point where I'd like to submit it for
formal review and try to land it in -HEAD.
 
There's a bug in the default sysctl policy: you're calling strcat on an uninitialised string, so it produces garbage output.  We also hit a panic when our application starts allocating many GBs of memory.  In that case the memory is split between two sockets, and I think it's crashing the way you described on IRC.




More information about the freebsd-arch mailing list