ABSTRACT
A heterogeneous chip multiprocessor (CMP) architecture consists of processor cores and caches of varying size and complexity. In a multi-programmed computing environment, threads of execution exhibit different run-time characteristics and hardware resource requirements, so a heterogeneous multiprocessor can significantly outperform a homogeneous one. Designing and managing this heterogeneity, however, raises issues that have a significant impact on overall system cost and performance: (a) replicating a standard core is an efficient strategy in homogeneous CMP design, whereas a heterogeneous CMP, particularly a fully custom processor not composed of pre-existing cores, incurs additional design, verification, and testing costs; (b) to take advantage of a heterogeneous architecture, a policy for mapping running tasks to processor cores must be determined that exploits the system's resources to maximize overall performance, which demands a sophisticated software scheduler; and (c) processor speeds are improving much faster than memory speeds, so data access time dominates the execution time of many programs, and this gap widens in a multiprocessor environment: as the core count of a chip multiprocessor grows, on-chip cache capacity and off-chip memory bandwidth per core become increasingly scarce.
In this paper, we propose a method of creating heterogeneity at run time through coordinated partitioning of the shared last-level cache and off-chip memory bandwidth. This lets a designer reuse pre-existing standard cores and a basic scheduler that need not be heterogeneity-aware, since the heterogeneity is created at run time by the cache and bandwidth partitions. We propose an efficient, low-overhead approach to set-wise cache partitioning that separates the addressing part from the data part and adds a graceful space-acquirement policy; it re-partitions the cache quickly, with minimal overhead and at fine granularity. We also extend a CPI-based bandwidth partitioning model to handle the read/write access behavior of applications. Finally, we analyze and experimentally evaluate six cache partitioning schemes and conclude that partitioning based on available bandwidth and L2 access frequency outperforms the others.
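The set-wise partitioning with graceful space acquirement described above can be sketched as follows. This is an illustrative model only, not the paper's implementation: set indices are assigned to cores, and a repartition proceeds one set at a time so that no bulk flush of the cache is needed. All class and method names here are hypothetical.

```python
class SetPartitionedCache:
    """Illustrative model of set-based partitioning of a shared last-level
    cache: each cache set is owned by one core, and repartitioning reassigns
    sets gradually ("graceful space acquirement")."""

    def __init__(self, num_sets, num_cores):
        # Start with an even round-robin split of sets across cores.
        self.owner = [i % num_cores for i in range(num_sets)]

    def quota(self, core):
        # Number of sets currently allotted to this core.
        return self.owner.count(core)

    def repartition_step(self, donor, receiver):
        """Move a single set from donor to receiver. Repeating this step
        grows a partition with small, bounded overhead per step instead of
        one expensive bulk repartition."""
        for s, o in enumerate(self.owner):
            if o == donor:
                self.owner[s] = receiver
                return s  # index of the reassigned set
        return None  # donor has no sets left to give up

cache = SetPartitionedCache(num_sets=8, num_cores=2)
print(cache.quota(0), cache.quota(1))  # 4 4
cache.repartition_step(donor=1, receiver=0)
print(cache.quota(0), cache.quota(1))  # 5 3
```

Separating the addressing part from the data part (as the paper proposes) would let the set-to-core mapping change without immediately moving cached data, which is what keeps each repartition step cheap.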
Index Terms
- Creating heterogeneity at run time by dynamic cache and bandwidth partitioning schemes