Elimination of cpu_l2cache_* functions

Marcel Moolenaar xcllnt at mac.com
Wed Feb 9 19:28:06 UTC 2011


On Feb 9, 2011, at 9:34 AM, Mark Tinguely wrote:

> On 2/9/2011 10:25 AM, Marcel Moolenaar wrote:
>> On Feb 9, 2011, at 1:56 AM, Olivier Houchard wrote:
>> 
>>> Hi Marcel,
>>> 
>>> On Mon, Feb 07, 2011 at 10:43:54AM -0800, Marcel Moolenaar wrote:
>>>> All,
>>>> 
>>>> I've been reviewing the use of the cpu_l2cache_* functions and found
>>>> that 1) they're missing from cpu_switch() and 2) they are always used
>>>> in conjunction with either cpu_idcache_* or cpu_dcache_*.
>>>> 
>>>> Since most CPU variants define them as null ops, isn't it better to
>>>> incorporate the functionality of cpu_l2cache_* in cpu_idcache_* and
>>>> cpu_dcache_* and eliminate them altogether?
>>>> 
>>>> Any objections to me removing cpu_l2cache_* and therefore changing
>>>> the semantics of cpu_idcache_* and cpu_dcache_* to apply to all
>>>> relevant cache levels?
>> Hi Olivier, good to hear from you,
>> 
>>> I chose to make the l2cache functions separate from the [i]dcache functions
>>> because there are a number of cases where an L1 cache flush is needed but an
>>> L2 flush is not, and doing both would be a performance penalty.
>> I'll take it from this that the L2 is PIPT for the Xscale core 3
>> as well, right?
>> 
>>> Also, most CPU variants define them as null ops now, but most new ARM CPUs
>>> come with an L2 cache, so we need to think about it carefully.
>> Agreed. If the L2 cache is PIPT, then we should not tie L1 & L2
>> together, and I'd like to change the code to remove the L2 cache
>> operations from most places where we have them now.
> 
> My point is the L2 caches had better be PIPT. If the L2 caches are virtually indexed and we do not flush them on context change, then we could end up with multiple copies in the L2 cache when we share a page and the width of the L2 cache is larger than a page.
> 
> It only makes sense from the hardware design side to make the L2 cache PIPT.

I have no problem with VIVT L2 caches. You deal with L2 anywhere you
deal with L1. In other words, you deal with them in cpu_idcache_* and
cpu_dcache_*.
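
To make that concrete, here's a rough sketch of the consolidation.
The cache_wbinv_all() wrapper and the cf_l2cache_present flag are
made up for illustration; only the cpu_idcache_*/cpu_l2cache_*
entry points are the existing ones:

    /* Existing per-CPU cache entry points (struct cpu_functions). */
    extern void cpu_idcache_wbinv_all(void);
    extern void cpu_l2cache_wbinv_all(void);

    /* Hypothetical: set during CPU identification if an outer L2 exists. */
    static int cf_l2cache_present;

    static void
    cache_wbinv_all(void)
    {

            /* L1 I+D write-back & invalidate, as before. */
            cpu_idcache_wbinv_all();

            /*
             * Maintain the L2 in the same call so callers no longer
             * have to pair the two by hand.  Cores without an L2
             * simply skip this instead of going through a null op.
             */
            if (cf_l2cache_present)
                    cpu_l2cache_wbinv_all();
    }

Callers would then use cache_wbinv_all() wherever they call both
functions back-to-back today.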

That's probably also why cpu_l2cache_* is a sub-optimal name. What is
significant here is not so much the distinction between L1 & L2, but
rather the distinction between VIVT & PIPT.

>> What I'm thinking about is the following: introduce pmap_switch(),
>> as we have it on ia64. This function is called from cpu_switch to
>> replace the existing active pmap with a new one. In pmap_switch()
>> we flush the VIVT caches *IFF* the new pmap is different from the
>> old (=currently active) pmap. Consequently, we're not going to
>> flush the VIVT caches when we switch between kernel threads, nor
>> do we flush the caches when we switch between threads in the
>> same process. In all other cases we'll flush the VIVT caches.
>> 
>> pmap_switch() is also called when a pmap interface function gets
>> a pmap to work on. The interface function switches to that pmap (if
>> applicable), which may or may not force a VIVT cache operation.
>> The pmap interface function does its work, after which it switches
>> back to the pmap that was active on entry to the function. This,
>> too, could trigger VIVT cache operations.
>> 
>> In any case: I'm thinking that this removes most of the explicit
>> calls to the cache functions while still guaranteeing coherency.
>> 
>> I need to look into the aliasing case to see how that is handled.
>> I have some ideas for that too...
>> 
>> Thoughts?
>> 
> There are places we can remove redundant cache operations; pmap_qenter() comes to mind.
> 
> A lot of the cache operations outside of a context switch occur because we share a page within the same memory map (I think that is what you mean by the aliasing case), because we turn access or writing off, and for DMA. For VIVT caches, I can't see these operations going away. Page copying and zeroing are other examples, and it seems like they need cache operations.

As I said, I need to look at the current implementation, but in
general my thinking is that you allow only one of the aliased VAs
to be "active" or mapped. All other VAs that map to the same PA
should cause a page fault. Handling the page fault should then:
1.  Flush the VIVT caches for the currently mapped VA.
2.  Remove the currently mapped VA.
3.  Add the new VA->PA mapping to satisfy the page fault.

This assumes that we're not concurrently accessing the data through
multiple aliased VAs.
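
In pseudo-C the fault path could look something like this. The
pmap_remove_mapping() and pmap_enter_mapping() helpers are
placeholders, not existing pmap interfaces; only
cpu_dcache_wbinv_range() is one of the real cache entry points:

    typedef unsigned long vm_offset_t;  /* stand-ins for the kernel types */
    typedef unsigned long vm_size_t;
    typedef unsigned long vm_paddr_t;
    #define PAGE_SIZE       4096

    /* Existing VIVT D-cache write-back & invalidate over a VA range. */
    extern void cpu_dcache_wbinv_range(vm_offset_t, vm_size_t);

    /* Placeholders for the real pmap mapping primitives. */
    extern void pmap_remove_mapping(vm_offset_t);
    extern void pmap_enter_mapping(vm_offset_t, vm_paddr_t);

    /*
     * Fault on an alias: cur_va is the one mapping we allowed to be
     * active, new_va is the aliased VA that just faulted; both refer
     * to the same physical page pa.
     */
    static void
    alias_fault(vm_offset_t cur_va, vm_offset_t new_va, vm_paddr_t pa)
    {

            /* 1. Flush the VIVT caches for the currently mapped VA. */
            cpu_dcache_wbinv_range(cur_va, PAGE_SIZE);

            /* 2. Remove the current mapping so it faults on next use. */
            pmap_remove_mapping(cur_va);

            /* 3. Enter the new VA->PA mapping to satisfy the fault. */
            pmap_enter_mapping(new_va, pa);
    }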

This, combined with pmap_switch(), seems a lot more transparent and
easier to comprehend. It hopefully solves all the outstanding problems
we still have, and it's hopefully closer to optimal than what we have
now.
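
For reference, here's a sketch of pmap_switch() along the lines of
the ia64 one. The curpmap variable and the convention of returning
the previous pmap are assumptions about how an ARM version might
look:

    struct pmap;                        /* opaque for this sketch */

    extern struct pmap *curpmap;        /* per-CPU active pmap (assumed) */
    extern void cpu_idcache_wbinv_all(void);    /* existing L1 flush */

    struct pmap *
    pmap_switch(struct pmap *pm)
    {
            struct pmap *prev;

            prev = curpmap;
            if (pm != prev) {
                    /*
                     * Only a real address space change can leave
                     * stale lines in a VIVT cache: switches between
                     * kernel threads, or between threads of one
                     * process, keep the same pmap and skip the
                     * flush entirely.
                     */
                    cpu_idcache_wbinv_all();
                    /* ...switch the translation table base here... */
                    curpmap = pm;
            }
            return (prev);
    }

A pmap interface function would then bracket its body with
prev = pmap_switch(pm); ... pmap_switch(prev);.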

I have no proof that these ideas actually work or actually work more
efficiently. That's why I'm discussing it here :-)

...

-- 
Marcel Moolenaar
xcllnt at mac.com




