svn commit: r339618 - head/sys/compat/linuxkpi/common/include/linux

Tijl Coosemans tijl at FreeBSD.org
Sun Nov 18 21:09:56 UTC 2018


On Sun, 18 Nov 2018 12:10:25 -0800 Matthew Macy <mat.macy at gmail.com> wrote:
>> Note that these functions are normally used on uncacheable memory which
>> is strongly ordered on x86.  There should be no reordering at all.  On
>> PowerPC barrier instructions are needed to prevent reordering.  
> 
> Correct. The current lkpi implementation also assumes that device
> endian == host endian. The Linux generic accessors do use endian
> macros to byte swap where necessary.

Yes, these functions are used to access little-endian registers, so byte
swapping is needed on big-endian machines.  For PowerPC, Linux also defines
functions to access big-endian registers, but we probably don't need those.
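
For reference, le16toh() amounts to a no-op on little-endian hosts and a
16-bit byte swap on big-endian ones.  A rough sketch (not the exact
sys/endian.h text):

#if _BYTE_ORDER == _LITTLE_ENDIAN
#define	le16toh(x)	((uint16_t)(x))		/* nothing to do */
#else
#define	le16toh(x)	(bswap16(x))		/* swap on big-endian hosts */
#endif

so only big-endian machines pay anything for the swap.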

> The following change fixes radeon attach issues:
> https://github.com/POWER9BSD/freebsd/commit/be6c98f5c2e2ed9a4935ac5b67c468b75f3b4457

+/* prevent prefetching of coherent DMA data ahead of a dma-complete */
+#ifndef __io_ar
+#ifdef rmb
+#define __io_ar()      rmb()
+#else
+#define __io_ar()      __compiler_membar()
+#endif
+#endif
+
+/* flush writes to coherent DMA data before possibly triggering a DMA read */
+#ifndef __io_bw
+#ifdef wmb
+#define __io_bw()      wmb()
+#else
+#define __io_bw()      __compiler_membar()
+#endif
+#endif
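
To make those two comments concrete, here is a rough driver-side sketch
of the pattern they protect, assuming the patch wires __io_bw() into the
write accessors and __io_ar() into the read accessors the way Linux's
asm-generic/io.h does.  The register offsets, flag bit, descriptor layout
and function name below are invented for illustration:

/* Illustration only: everything here except readl()/writel() is made up. */
struct fakedev_desc {
	uint64_t	addr;		/* descriptor in coherent DMA memory */
	uint32_t	len;
	uint32_t	data;
};

static void
fakedev_dma_roundtrip(volatile char *regs, struct fakedev_desc *desc,
    uint64_t paddr, uint32_t len)
{

	desc->addr = htole64(paddr);	/* CPU fills the coherent descriptor */
	desc->len = htole32(len);
	/*
	 * The __io_bw() in writel() orders the stores above before the
	 * doorbell write that can trigger a DMA read by the device.
	 */
	writel(1, regs + 0x10);		/* hypothetical doorbell register */

	/*
	 * The __io_ar() in readl() keeps the desc->data load below from
	 * being speculated ahead of the completion check.
	 */
	while ((readl(regs + 0x14) & 0x1) == 0)
		cpu_spinwait();
	(void)le32toh(desc->data);	/* now safe to consume */
}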

...

 static inline uint16_t
 readw(const volatile void *addr)
 {
 	uint16_t v;
 
-	__compiler_membar();
-	v = *(const volatile uint16_t *)addr;
-	__compiler_membar();
+	__io_br();
+	v = le16toh(__raw_readw(addr));
+	__io_ar();
 	return (v);
 }
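
For anyone not used to the Linux naming: __raw_readw() is just the bare
volatile load, with no byte swap and no barrier, so its presumed shape
(not quoted from the patch) is simply:

/* Presumed shape: plain volatile load, no byte swap, no barrier. */
static inline uint16_t
__raw_readw(const volatile void *addr)
{

	return (*(const volatile uint16_t *)addr);
}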

For x86, rmb and wmb are defined as the lfence and sfence instructions,
which shouldn't be necessary here.
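
For reference, the amd64 definitions are roughly (quoting from memory):

#define	mb()	__asm __volatile("mfence;" : : : "memory")
#define	wmb()	__asm __volatile("sfence;" : : : "memory")
#define	rmb()	__asm __volatile("lfence;" : : : "memory")

Since UC accesses are already strongly ordered, __compiler_membar() alone
would give these accessors the same guarantee on x86.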

