Compiler Optimizations Using Data Compression to Decrease Address Reference Entropy

  • Conference paper
Languages and Compilers for Parallel Computing (LCPC 2002)

Part of the book series: Lecture Notes in Computer Science ((LNTCS,volume 2481))

Abstract

In modern computers, a single “random” access to main memory often takes as much time as executing hundreds of instructions. Rather than using traditional compiler approaches to enhance locality by interchanging loops, reordering data structures, etc., this paper proposes the radical concept of using aggressive data compression technology to improve hierarchical memory performance by reducing memory address reference entropy.

In some cases, conventional compression technology can be adapted. However, where variable access patterns must be permitted, other compression techniques must be used. For the special case of random access to elements of sparse matrices, suitable data structures and compiler technology already exist. Our approach is much more general, using compressive hash functions to implement random-access lookup tables. Techniques that can improve the effectiveness of any compression method in reducing memory access entropy are also discussed.
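To make the idea of a hash-based random-access lookup table concrete, here is a minimal illustrative sketch (not the paper's actual compressive hash function): a sparse matrix stored in a hash table keyed by `(row, col)`, so only nonzero entries occupy memory while element access remains O(1) on average. The class name and methods are hypothetical, chosen only for this example.

```python
# Illustrative sketch only: a sparse matrix backed by a hash table.
# Only nonzero entries are stored, yet any element can be read or
# written by random access, as the abstract's lookup tables require.

class SparseMatrix:
    def __init__(self, rows, cols, default=0.0):
        self.rows, self.cols, self.default = rows, cols, default
        self._data = {}  # maps (row, col) -> nonzero value

    def __setitem__(self, key, value):
        r, c = key
        if not (0 <= r < self.rows and 0 <= c < self.cols):
            raise IndexError(key)
        if value == self.default:
            self._data.pop(key, None)  # never store default entries
        else:
            self._data[key] = value

    def __getitem__(self, key):
        # Missing keys are implicitly the default (zero) value.
        return self._data.get(key, self.default)

    def stored_entries(self):
        return len(self._data)

m = SparseMatrix(1000, 1000)
m[3, 7] = 2.5
m[999, 0] = -1.0
print(m[3, 7], m[0, 0], m.stored_entries())  # 2.5 0.0 2
```

A 1000 by 1000 dense matrix of doubles would need roughly 8 MB; here the footprint scales with the number of nonzeros, which is the sense in which compression reduces the range of addresses actually touched.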




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Dietz, H.G., Mattox, T.I. (2005). Compiler Optimizations Using Data Compression to Decrease Address Reference Entropy. In: Pugh, B., Tseng, C.-W. (eds.) Languages and Compilers for Parallel Computing. LCPC 2002. Lecture Notes in Computer Science, vol. 2481. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11596110_9


  • DOI: https://doi.org/10.1007/11596110_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-30781-5

  • Online ISBN: 978-3-540-31612-1

  • eBook Packages: Computer Science (R0)
