Exploiting locality in LT-RAM computations

  • Conference paper
Algorithm Theory — SWAT '94 (SWAT 1994)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 824))

Abstract

As processor speeds continue to increase, the primary assumption of the RAM model, that all memory locations may be accessed in unit time, becomes unrealistic. In the following we consider an alternative model, called the Limiting Technology RAM or LT-RAM, in which the cost of accessing a given memory location depends on the size of the memory module. In general, a computation performed on an LT-RAM with a memory of size n × n runs n times slower than on a comparable RAM if no special precautions are taken. Here we provide a general technique by which, for a class of algorithms, this slow-down can be reduced to O(2^(6·log^(1/2) n)) for sequential memory access, or to just O(1) if the memory accesses can be pipelined.
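
To make the cost model concrete, here is a minimal sketch (not taken from the paper) contrasting the two access-cost assumptions: a unit-cost RAM versus an LT-RAM-style memory of size n × n in which the latency of an access grows with the distance of the addressed cell from the processor. The corner placement of the processor, the Manhattan-distance cost, and all names and constants are illustrative assumptions; the sketch only reproduces the roughly n-fold slowdown of an unoptimized sequential scan mentioned above.

```python
# Illustrative sketch of the LT-RAM cost assumption described in the abstract.
# ASSUMPTIONS (not from the paper): the processor sits at one corner of an
# n x n memory module, and the latency of one access is proportional to the
# Manhattan distance of the addressed cell; a classical RAM charges unit cost.

def ram_cost(_address: int) -> int:
    """Classical RAM model: every access takes one time unit."""
    return 1

def lt_ram_cost(address: int, side: int) -> int:
    """Hypothetical LT-RAM-style cost: 1 plus the Manhattan distance of the
    cell (row, col) from the processor's corner of the n x n module."""
    row, col = divmod(address, side)
    return 1 + row + col

def trace_cost(addresses, cost) -> int:
    """Total time to execute a sequence of memory accesses."""
    return sum(cost(a) for a in addresses)

if __name__ == "__main__":
    n = 256                  # module side length, i.e. an n x n array of cells
    scan = range(n * n)      # a plain sequential scan with no locality tricks
    unit = trace_cost(scan, ram_cost)
    lt = trace_cost(scan, lambda a: lt_ram_cost(a, n))
    # The average LT-RAM access cost over the scan is about n, so the scan is
    # roughly n times slower than on a unit-cost RAM, as the abstract states.
    print(f"unit-cost RAM: {unit}")
    print(f"LT-RAM       : {lt}  (slowdown ~ {lt / unit:.0f}x)")
```

The paper's contribution is precisely about avoiding this naive slowdown by exploiting locality, bringing it down to the bounds quoted above.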

This research was partially supported by the Leonardo Fibonacci Institute for the Foundations of Computer Science, and by EC Cooperative Action IC-1000 (project ALTEC: Algorithms for Future Technologies).

Editor information

Erik M. Schmidt, Sven Skyum

Copyright information

© 1994 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Sibeyn, J.F., Harris, T. (1994). Exploiting locality in LT-RAM computations. In: Schmidt, E.M., Skyum, S. (eds) Algorithm Theory — SWAT '94. SWAT 1994. Lecture Notes in Computer Science, vol 824. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-58218-5_31

  • DOI: https://doi.org/10.1007/3-540-58218-5_31

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-58218-2

  • Online ISBN: 978-3-540-48577-3

  • eBook Packages: Springer Book Archive
