svn commit: r295557 - head/sys/dev/uart

Ian Lepore ian at freebsd.org
Tue Feb 16 00:33:31 UTC 2016


On Tue, 2016-02-16 at 11:01 +1100, Bruce Evans wrote:
> On Mon, 15 Feb 2016, Ian Lepore wrote:
> 
> > On Tue, 2016-02-16 at 09:28 +1100, Bruce Evans wrote:
> >> On Mon, 15 Feb 2016, Michal Meloun wrote:
> >>
> >>> [...]
> >>> Please note that the ARM architecture does not have vectored
> >>> interrupts; the CPU must read the actual interrupt source from an
> >>> external interrupt controller (GIC) register. This register contains
> >>> a predefined value if none of the interrupts are active.
> >>>
> >>> 1 - CPU1: enters ns8250_bus_transmit() and sets IER_ETXRDY.
> >>> 2 - HW: UART interrupt is asserted, processed by GIC and signaled
> >>>    to CPU2.
> >>> 3 - CPU2: enters interrupt service.
> >>
> >> It is blocked by uart_lock(), right?
> >>
> >>> 4 - CPU1: writes a character into the REG_DATA register.
> >>> 5 - HW: UART clears its interrupt request.
> >>> 6 - CPU2: reads the interrupt source register. No active interrupt is
> >>>    found, a spurious interrupt is signaled, and the CPU leaves the
> >>>    interrupt state.
> >>> 7 - CPU1: executes uart_barrier(). This function is not empty on ARM,
> >>>    and can be slow in some cases.
> >>
> >> It is not empty even on x86, although it probably should be.
> >>
> >> BTW, if arm needs the barrier, then how does it work with
> >> bus_space_barrier() referenced in just 25 files in all of /sys/dev?
> >
> > With a hack, of course.  In the arm interrupt-controller drivers we
> > always call bus_space_barrier() right before doing an EOI.  It's not a
> > 100% solution, but in practice it seems to work pretty well.
> 
> I thought about the x86 behaviour a bit more and now see that it does
> need barriers but not the ones given by bus_space_barrier().  All (?)
> interrupt handlers use mutexes (if not driver ones, then higher-level
> ones).   These might give stronger or different ordering than given by
> bus_space_barrier().  On x86, they use the same memory bus lock as
> the bus_space_barrier().  This is needed to give ordering across
> CPUs.  But for accessing a single device, you only need program order
> for a single CPU.  This is automatic on x86 provided a mutex is used
> to prevent other CPUs accessing the same device.  And if you don't use
> a mutex, then bus_space_barrier() cannot give the necessary ordering
> since it cannot prevent other CPUs from interfering.
> 
> So how does bus_space_barrier() before EOI make much difference?  It
> doesn't affect the order for a bunch of accesses on a single CPU.
> It must do more than a mutex to do something good across CPUs.
> Arguably, it is a bug in mutexes if they don't give synchronization
> for device memory.
> 
> > ...
> > The hack code does a drain-write-buffer which doesn't g'tee that the
> > slow peripheral write has made it all the way to the device, but it
> does at least g'tee that the write to the bus the peripheral is on has
> > been posted and ack'd by any bus<->bus bridge, and that seems to be
> > good enough in practice.  (If there were multiple bridged busses
> > downstream it probably wouldn't be, but so far things aren't that
> > complicated inside the socs we support.)
> 
> Hmm, so there is some automatic strong ordering but mutexes don't
> work for device memory?
> 

I guess you keep mentioning mutexes because on x86 their implementation
uses some of the same instructions that are involved in bus_space
barriers?  Otherwise I can't see what they have to do with anything
related to the spurious interrupts that happen on arm.  (You also
mentioned multiple CPUs, but that's not a requirement for this trouble
on arm; it'll happen with a single core.)

The piece of info you're missing might be the fact that memory-mapped
device registers on arm are mapped with the Device attribute which
gives stronger ordering than Normal memory.  In particular, writes are
in order and not combined, but they are buffered.  In some designs
there are multiple buffers, so there can be multiple writes that
haven't reached the hardware yet.  A read from the same region will
stall until all writes to that region are done, and there is also an
instruction that specifically forces out the buffers and stalls until
they're empty.
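
To make that concrete, here is roughly what that looks like from a
driver's point of view.  The softc layout and the MYDEV_REG_CTRL offset
are invented for illustration, and in real code you'd use one of the two
options, not both back to back:

#include <sys/param.h>
#include <machine/bus.h>

#define	MYDEV_REG_CTRL	0x10		/* hypothetical register offset */

struct mydev_softc {
	bus_space_tag_t		sc_bst;
	bus_space_handle_t	sc_bsh;
};

static void
mydev_write_ctrl(struct mydev_softc *sc, uint32_t val)
{
	/* On arm this lands in a write buffer, not yet in the device. */
	bus_space_write_4(sc->sc_bst, sc->sc_bsh, MYDEV_REG_CTRL, val);

	/*
	 * Option 1: read back from the same Device-attribute region;
	 * the read stalls until the buffered writes have completed.
	 */
	(void)bus_space_read_4(sc->sc_bst, sc->sc_bsh, MYDEV_REG_CTRL);

	/*
	 * Option 2: an explicit barrier, which on arm drains the
	 * write buffer before returning.
	 */
	bus_space_barrier(sc->sc_bst, sc->sc_bsh, MYDEV_REG_CTRL, 4,
	    BUS_SPACE_BARRIER_WRITE);
}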

Without doing the drain-write-buffer (or a device read) after each
write, the only g'tee you'd get is that each device sees the writes
directed at it in the order they were issued.  With devices A and B,
you could write a sequence of A1 B1 A2 B2 A3 B3 and they could arrive
at the devices as A1 A2 B1 B2 A3 B3, or any other permutation, as long
as device A sees 123 and device B sees 123.
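
Expressed in code, preserving the relative A/B order means draining
between the writes to the two devices.  Reusing the invented softc from
the sketch above (register offsets still made up):

static void
interleaved_writes(struct mydev_softc *a, struct mydev_softc *b)
{
	/* A1: without a barrier this may still be sitting in a buffer. */
	bus_space_write_4(a->sc_bst, a->sc_bsh, 0x00, 1);
	bus_space_barrier(a->sc_bst, a->sc_bsh, 0x00, 4,
	    BUS_SPACE_BARRIER_WRITE);
	/* B1: issued only after A1 has been pushed out towards device A. */
	bus_space_write_4(b->sc_bst, b->sc_bsh, 0x00, 1);
	bus_space_barrier(b->sc_bst, b->sc_bsh, 0x00, 4,
	    BUS_SPACE_BARRIER_WRITE);
	/* ...and so on for A2, B2, A3, B3. */
}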

So on arm the need for barriers arises primarily when two different
devices interact with each other in some way and it matters that a
series of interleaved writes reaches the devices in the same relative
order they were issued by the cpu.  In practice that condition mostly
comes up with the PIC, which interacts with basically every other
device.
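
The hack mentioned above ends up looking something like this in the
interrupt controller's EOI path; the function name, softc, and GICC_EOIR
offset here are illustrative rather than the exact in-tree code:

#define	GICC_EOIR	0x10	/* CPU-interface end-of-interrupt register */

static void
pic_eoi(struct mydev_softc *sc, uint32_t irq)
{
	/*
	 * Drain buffered writes to other devices (e.g. the write that
	 * cleared the interrupt condition in the device's own driver)
	 * before telling the PIC the interrupt is handled, so the
	 * device has deasserted its request by the time the PIC
	 * re-evaluates what's pending.  On arm the drain isn't limited
	 * to the PIC's own region, which is why issuing it on the
	 * PIC's handle works well enough in practice.
	 */
	bus_space_barrier(sc->sc_bst, sc->sc_bsh, 0, 4,
	    BUS_SPACE_BARRIER_READ | BUS_SPACE_BARRIER_WRITE);
	bus_space_write_4(sc->sc_bst, sc->sc_bsh, GICC_EOIR, irq);
}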

I expect trouble to show up any time now as we start implementing DMA
drivers in socs that have generic DMA engines that are only loosely
coupled to the devices they're moving data for.  That just seems like
another place where a single driver is coordinating the actions of two
different pieces of hardware that may be on different busses, and it's
ripe for the lack of barriers to cause rare or intermittent failures.
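
If and when we grow such drivers, I'd expect the fix to look much like
the PIC case: a barrier between programming one block and kicking the
other.  Purely as an invented example (names, offsets, and parameters
are all made up):

static void
start_dma_transfer(struct mydev_softc *periph, struct mydev_softc *dma,
    uint32_t busaddr, uint32_t len)
{
	/* Program the peripheral side of the transfer (invented reg). */
	bus_space_write_4(periph->sc_bst, periph->sc_bsh, 0x04, len);

	/*
	 * Make sure the peripheral setup has been pushed out before
	 * the DMA engine, possibly on a different bus, is started;
	 * otherwise the "go" write below can arrive first.
	 */
	bus_space_barrier(periph->sc_bst, periph->sc_bsh, 0, 8,
	    BUS_SPACE_BARRIER_WRITE);

	/* Now hand the buffer to the DMA engine and start it. */
	bus_space_write_4(dma->sc_bst, dma->sc_bsh, 0x00, busaddr);
	bus_space_write_4(dma->sc_bst, dma->sc_bsh, 0x08, 1);
}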

-- Ian


