DOI: 10.1145/3532213.3532250
research-article

PACP: A Prefetch-aware Multi-core Shared Cache Partitioning Strategy

Published: 13 July 2022

ABSTRACT

In multi-core systems, hardware prefetchers intensify the contention of memory-intensive programs for shared last-level cache (LLC) capacity, degrading overall system performance. To address this problem, we propose a prefetch-aware multi-core shared cache partitioning (PACP) strategy. Based on how each program's performance changes once prefetching is enabled, PACP classifies the program as either prefetch-sensitive or non-prefetch-sensitive, and then dynamically allocates cache resources to each core according to its programs' cache utilization and memory bandwidth changes. The strategy is simulated with ChampSim. Experimental results show that PACP improves the performance of 4-core and 16-core systems by 17.54% and 12.76% on average, respectively. PACP effectively balances cache capacity among cores and reduces inter-core interference, thereby improving system performance.
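As a concrete illustration of the two steps the abstract describes (sensitivity classification followed by per-core allocation), the sketch below shows, in C++ (the language of the ChampSim simulator), one way such a classify-then-partition loop could look. It is not the authors' implementation: the CoreStats fields, the 5% sensitivity threshold, the bandwidth penalty, and the greedy way-allocation heuristic are all assumptions made purely for illustration.

    // Illustrative sketch only (not PACP's actual code): classify each core as
    // prefetch-sensitive or not, then hand out LLC ways greedily by a
    // utility score weighted against bandwidth-hungry, insensitive cores.
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct CoreStats {
        double ipc_prefetch_off;  // measured IPC with the hardware prefetcher disabled
        double ipc_prefetch_on;   // measured IPC with the hardware prefetcher enabled
        double llc_hit_gain;      // estimated extra LLC hits per additional way (utility)
        double bw_increase;       // fractional growth in memory bandwidth after enabling prefetch
    };

    // Hypothetical rule: a core is prefetch-sensitive if enabling the prefetcher
    // improves its IPC by more than 5%.
    bool prefetch_sensitive(const CoreStats& s) {
        return s.ipc_prefetch_on > 1.05 * s.ipc_prefetch_off;
    }

    // Greedy allocation: every core gets one way, then each remaining way goes to
    // the core with the highest weighted utility. Non-prefetch-sensitive cores whose
    // bandwidth grows sharply are de-prioritised, since their prefetches mostly
    // pollute the shared LLC.
    std::vector<int> allocate_ways(const std::vector<CoreStats>& cores, int total_ways) {
        std::vector<int> ways(cores.size(), 1);
        int remaining = total_ways - static_cast<int>(cores.size());
        while (remaining-- > 0) {
            std::size_t best = 0;
            double best_score = -1.0;
            for (std::size_t i = 0; i < cores.size(); ++i) {
                double score = cores[i].llc_hit_gain;
                if (!prefetch_sensitive(cores[i]) && cores[i].bw_increase > 0.2)
                    score *= 0.5;  // hypothetical penalty factor
                if (score > best_score) { best_score = score; best = i; }
            }
            ++ways[best];
        }
        return ways;
    }

    int main() {
        // Four example cores sharing a 16-way LLC.
        std::vector<CoreStats> cores = {
            {1.00, 1.30, 0.80, 0.10},  // prefetch-sensitive, reuses the cache well
            {0.90, 0.92, 0.20, 0.40},  // insensitive and bandwidth-hungry
            {1.20, 1.10, 0.60, 0.05},  // prefetching hurts; relies on cache space instead
            {0.70, 0.95, 0.40, 0.15},
        };
        std::vector<int> ways = allocate_ways(cores, 16);
        for (std::size_t i = 0; i < ways.size(); ++i)
            std::printf("core %zu: %d ways\n", i, ways[i]);
        return 0;
    }

In this sketch the per-way utility and bandwidth-growth numbers would come from runtime monitors (e.g., set-sampling utility monitors and memory-traffic counters); the actual signals and thresholds PACP uses are described in the paper itself.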


    • Published in

      ICCAI '22: Proceedings of the 8th International Conference on Computing and Artificial Intelligence
      March 2022
      809 pages
      ISBN: 9781450396110
      DOI: 10.1145/3532213

      Copyright © 2022 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 13 July 2022


      Qualifiers

      • research-article
      • Research
      • Refereed limited
