svn commit: r348490 - head/sys/powerpc/aim
Justin Hibbits <jhibbits at FreeBSD.org>
Sat Jun 1 01:40:16 UTC 2019
Author: jhibbits
Date: Sat Jun 1 01:40:14 2019
New Revision: 348490
URL: https://svnweb.freebsd.org/changeset/base/348490
Log:
powerpc/moea: Fix moea64 native VA invalidation
Summary:
moea64_insert_to_pteg_native()'s invalidation only works by happenstance.
The purpose of the shifts and XORs is to extract the VSID in order to
reverse-engineer the lower bits of the VPN. Currently a segment size is
256MB (2**28) and ADDR_API_SHFT64 is 16, so ADDR_SR_SHFT - ADDR_API_SHFT64
(28 - 16 = 12) happens to equal ADDR_PIDX_SHFT, and the old shift produced
the right value by accident. However, it is semantically incorrect: we do
not want to shift by the page index size, we want to shift to get to the
VSID.
Tested by: bdragon
Differential Revision: https://reviews.freebsd.org/D20467
Modified:
head/sys/powerpc/aim/moea64_native.c
Modified: head/sys/powerpc/aim/moea64_native.c
==============================================================================
--- head/sys/powerpc/aim/moea64_native.c Sat Jun 1 01:22:21 2019 (r348489)
+++ head/sys/powerpc/aim/moea64_native.c Sat Jun 1 01:40:14 2019 (r348490)
@@ -646,15 +646,12 @@ moea64_insert_to_pteg_native(struct lpte *pvo_pt, uint
* "Modifying a Page Table Entry". Need to reconstruct
* the virtual address for the outgoing entry to do that.
*/
- if (oldptehi & LPTE_BIG)
- va = oldptehi >> moea64_large_page_shift;
- else
- va = oldptehi >> ADDR_PIDX_SHFT;
+ va = oldptehi >> (ADDR_SR_SHFT - ADDR_API_SHFT64);
if (oldptehi & LPTE_HID)
va = (((k >> 3) ^ moea64_pteg_mask) ^ va) &
- VSID_HASH_MASK;
+ (ADDR_PIDX >> ADDR_PIDX_SHFT);
else
- va = ((k >> 3) ^ va) & VSID_HASH_MASK;
+ va = ((k >> 3) ^ va) & (ADDR_PIDX >> ADDR_PIDX_SHFT);
va |= (oldptehi & LPTE_AVPN_MASK) <<
(ADDR_API_SHFT64 - ADDR_PIDX_SHFT);
PTESYNC();