Cubieboard: Spurious interrupt detected

Warner Losh imp at bsdimp.com
Sat Sep 6 23:09:43 UTC 2014


On Sep 6, 2014, at 3:18 PM, Ian Lepore <ian at FreeBSD.org> wrote:

> On Sat, 2014-09-06 at 14:20 -0600, Warner Losh wrote:
>> On Sep 6, 2014, at 8:54 AM, Ian Lepore <ian at FreeBSD.org> wrote:
>> 
>>> On Fri, 2014-09-05 at 23:45 -0700, Adrian Chadd wrote:
>>>> The device itself may have FIFOs and internal busses that also need to
>>>> be flushed.
>>>> 
>>> 
>>> The question isn't whether or not it's sufficient, because it's
>>> necessary.  The device driver knows what its hardware requirements are
>>> and should meet them.  It does not know what its parent bus
>>> requirements are, and that's why it must call bus_space_barrier() to
>>> handle architecture needs above the level of the device itself.
>> 
>> Yea, all that bus_space_barrier() does is say “We've made sure that
>> the CPU and all other bridges between the CPU and the device have
>> any buffered writes pushed to the device.” If the device has additional FIFOs
>> and other things, that's 100% on the device writer.
>> 
>>>>> I was just looking at i386's implementation of bus_space_barrier and
>>>>> it just does a stack access...  This won't be sufficient to clear any
>>>>> PCI bridges that may have the write still pending...
>>>> 
>>>> Right. The memory barrier semantics right now don't at all guarantee
>>>> that bus and device FIFOs have actually been flushed.
>>>> 
>>> The fact that some architectures don't implement bus_space_barrier() in
>>> a way that's useful for that architecture is just a bug.  It doesn't
>>> change the fact that bus_space_barrier() is currently our only defined
>>> MI interface to barriers in device space.
>> 
>> Agreed. But PCI defines that reads flush out all prior writes.
>> 
>>>> So I don't think doing it using the existing bus space barrier
>>>> semantics is 'right'. For interrupts, it's highly likely that we do
>>>> actually need device drivers to read from their interrupt register to
>>>> ensure the update has been posted before returning. That's better than
>>>> causing entire L2 cache flushes.
>>>> 
>>> 
>>> Where did you see code that does an "entire L2 cache flush"?  You
>>> didn't, you just saw a function name and made assumptions about what it
>>> does.  The fact is, it does what is necessary for the architecture.  It
>>> also happens to do what a write-then-read does on armv6, but that's
>>> exactly the sort of assumption that should NOT be written into MI
>>> code.  
>> 
>> Yea, a bus barrier just means a temporal ordering. The exact strength
>> of that guarantee is a bit fuzzy, but generally it means we know things
>> are done. L2 is usually not an issue, because we have the devices mapped
>> uncached.
>> 
> 
> It's more complicated than that in the armv6/7 world.  They are mapped as
> Device memory which means uncached but writes are buffered (using some
> rules specifically designed to work for memory mapped devices, such as
> disabling write-combining so that N writes issued results in N writes at
> the device).  The buffering happens at the L2 cache controller, so when
> you need to ensure that the write has reached the hardware you can make
> a call to an L2 routine that blocks until the write has completed.
> 
> On armv7 you can also do a read of any address within the same 1K
> address block as the write you want to have completed, but I don't think
> any driver should contain code like that unless it's for soc-specific
> hardware.  Like code for an on-chip timer might be able to make that
> assumption, but an EHCI driver couldn’t.

Wouldn’t the bus_space_barrier() block until the write to the bus space area
flushes? Or does our API make that kinda tough to implement?

>> As for reading the ISR, that is device dependent. When using MSI things are
>> different because the status is pushed to the host and you get the info from reading
>> the host memory. Ideally, you'd want to write to ack them without having to do
>> a read over PCIe, but even that's not always required (and on such devices
>> they would work correctly without bus barriers). Newer devices have been designed
>> to avoid round-trips over the PCIe bus and to have ordering semantics that make
>> that safe.
>> 
>>>> Question is - can we expose this somehow as a generic device method,
>>>> so the higher bus layers can actually do something with it, or should
>>>> we just leave it to device drivers to correctly do?
>>>> 
>>> 
>>> In what way is that not EXACTLY what bus_space_barrier() is defined to
>>> do?
>>> 
>>> I've got to say, I don't understand all this pushback.  We have one
>>> function defined already that's intended to be the machine-independent
>>> way to ensure that any previous access to hardware using bus_space reads
>>> and writes has completed, so why all this argument that it is not the
>>> function to use?
>> 
>> I agree with Ian here. If we've messed up, we need to fix that. But for the most
>> part, devices that are in embedded land actually do the right thing (more often
>> than not). If we´re doing something wrong on coherent architectures that accidentally
>> works, we should fix that.
>> 
>> I may disagree with Ian about the need for some of the flushing based on the notion
>> that we should fix the drivers. I feel it just makes the problem persist. Things should
>> be more visibly broken. Ian thinks things should work, and I have a hard time arguing
>> with that position even if I disagree :).
> 
> It has to work because if it doesn't then they'll start running linux on
> imx6 at work and I'll have to look for a new job. :)

Like I said, it is hard to argue with it…

I’m curious if you know which drivers are broken? Is this list of known broken long, or is
this just a case of “at least one was broken, so let’s be conservative for now to get the
thing working?” If the latter, is there any way we can see what’s broken and get bugzillas
going on them?

Warner