ABSTRACT
In this paper, we provide the authors' retrospective analysis of the paper "Bloom Filtering Cache Misses for Accurate Data Speculation and Prefetching," which was published in the Proceedings of the 2002 International Conference on Supercomputing.
DOI: http://dx.doi.org/10.1145/514191.514219
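The technique named in the title builds on Bloom's 1970 data structure: a bit array queried through several hash functions, which can report false positives but never false negatives, so a negative lookup guarantees the address is not cached. As background, here is a minimal software sketch of a Bloom filter; it is an illustration only, not the paper's hardware design, and the class and method names (`BloomFilter`, `might_contain`) are ours.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a bit array plus k hash functions.
    Lookups may yield false positives but never false negatives,
    which is what makes the structure usable for filtering out
    guaranteed cache misses."""

    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _indexes(self, item):
        # Derive k bit positions from one digest of the item.
        digest = hashlib.sha256(str(item).encode()).digest()
        for i in range(self.num_hashes):
            chunk = digest[i * 4:(i + 1) * 4]
            yield int.from_bytes(chunk, "big") % self.num_bits

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = True

    def might_contain(self, item):
        # False => definitely absent; True => probably present.
        return all(self.bits[idx] for idx in self._indexes(item))
```

In the cache-miss setting, the filter would track the addresses of cached blocks: when `might_contain` returns False for a load address, the access is a certain miss and dependent instructions can be scheduled (or a prefetch issued) accordingly.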
Index Terms
- Author retrospective for "Bloom Filtering Cache Misses for Accurate Data Speculation and Prefetching"