n_slbs==32 vs. restore_kernsrs use of slbmte in its loop
Mark Millard
marklmi at yahoo.com
Wed May 1 00:08:47 UTC 2019
When aim_early_init sets n_slbs=32, the code in
restore_kernsrs does not seem to respect that value
and loops through doing 64-1 slbmte instances:
restore_kernsrs:
	GET_CPUINFO(%r28)
	addi	%r28,%r28,PC_KERNSLB
	ld	%r29,16(%r28)		/* One past USER_SLB_SLOT */
	cmpdi	%r29,0
	beqlr				/* If first kernel entry is invalid,
					 * SLBs not in use, so exit early */

	/* Otherwise, set up SLBs */
	li	%r29, 0			/* Set the counter to zero */

	slbia
	slbmfee	%r31,%r29
	clrrdi	%r31,%r31,28
	slbie	%r31
1:	cmpdi	%r29, USER_SLB_SLOT	/* Skip the user slot */
	beq-	2f

	ld	%r31, 8(%r28)		/* Load SLBE */
	cmpdi	%r31, 0			/* If SLBE is not valid, stop */
	beqlr
	ld	%r30, 0(%r28)		/* Load SLBV */
	slbmte	%r30, %r31		/* Install SLB entry */

2:	addi	%r28, %r28, 16		/* Advance pointer */
	addi	%r29, %r29, 1
	cmpdi	%r29, 64		/* Repeat if we are not at the end */
	blt	1b

	blr
Note the "64" in the last cmpd after %r29 is incremented
by 1 --and the following blt.
If I gather right, instead of 32-1 kernel slbmte assignments
when n_slbs==32, this continues on to try for 64-1 assignments.
(The "-1"s being for the USER_SLB_SLOT avoidance.)
Is this okay for some reason? Are there guaranteed special values
in %r30 and %r31 that avoid problems? (Note that n_slbs==32
is a G5 context. I'm not claiming this contributes to what
I've been looking into; the constant 64 just looks odd given
the variability in the n_slbs value.)
===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)