ABSTRACT
LRU is the de facto standard page replacement strategy. It is well known, however, that there are many situations in which LRU behaves far from optimally. We present a replacement policy that approximates the optimal algorithm OPT more closely by predicting the time at which each page will be referenced again and evicting the page with the largest predicted time of next reference. Experiments using several benchmarks from the SPEC 2000 benchmark suite show that our algorithm is superior to LRU, in some cases by as much as 25%-30% and in one case by more than 100%.
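The eviction rule described above can be sketched as follows. This is a minimal illustration of the Belady-style principle the abstract names, not the paper's implementation: the predictor here is a hypothetical stand-in, since the abstract does not describe the actual prediction mechanism.

```python
def evict(cache, predict_next_ref):
    """Return the resident page with the largest predicted time of next
    reference -- the page OPT would evict if the predictions were exact.

    cache            -- iterable of resident page identifiers
    predict_next_ref -- callable mapping a page to its predicted next-use time
    """
    return max(cache, key=predict_next_ref)

# Illustrative (made-up) prediction table: page -> predicted next-reference time.
predictions = {"A": 12, "B": 3, "C": 47}
victim = evict(predictions.keys(), predictions.get)
# Page "C" is predicted to be needed furthest in the future, so it is evicted.
```

In contrast, LRU would evict whichever page was *used* longest ago; the policy above instead looks forward, which is why its quality hinges entirely on the accuracy of the predictor.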
Index Terms
- Approximating the optimal replacement algorithm