ABSTRACT
Modern computers commonly employ multi-level cache hierarchies. This work revolves around one question: are all levels needed by all applications during all phases of their execution? The question is especially relevant in multi-programmed scenarios, where giving an entire cache level to one application and depriving the other can actually increase overall performance.
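The idea behind a miss-rate aware policy can be sketched as follows: monitor each application's hit/miss behavior at a cache level over an epoch, and let an application bypass ("skip") that level when it gets almost no hits there, freeing the capacity for co-running applications. The threshold-based policy, epoch length, and class name below are illustrative assumptions, not the paper's actual mechanism.

```python
# Minimal sketch of a miss-rate-aware skip decision for one cache level.
# LevelSkipMonitor, the 95% threshold, and the fixed-length epoch are
# illustrative assumptions, not the mechanism proposed in the paper.

class LevelSkipMonitor:
    def __init__(self, miss_threshold=0.95, epoch_accesses=1000):
        self.miss_threshold = miss_threshold  # skip the level above this miss rate
        self.epoch_accesses = epoch_accesses  # re-evaluate every N accesses
        self.hits = 0
        self.misses = 0
        self.skip = False                     # current decision for this app/level

    def record(self, hit: bool) -> None:
        """Record one access at this level and re-evaluate at epoch end."""
        if hit:
            self.hits += 1
        else:
            self.misses += 1
        if self.hits + self.misses >= self.epoch_accesses:
            self._evaluate()

    def _evaluate(self) -> None:
        total = self.hits + self.misses
        miss_rate = self.misses / total
        # If the application gets almost no hits at this level, bypass it:
        # its blocks stop occupying the level, leaving capacity to co-runners.
        self.skip = miss_rate > self.miss_threshold
        self.hits = self.misses = 0           # start a fresh epoch
```

For example, an application that streams through data much larger than the cache would miss in every epoch and be marked to skip the level, while a cache-friendly co-runner keeps using it.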
Index Terms
- SkipCache: miss-rate aware cache management