Supporting SPMD execution for dynamic data structures

  • Conference paper
Languages and Compilers for Parallel Computing (LCPC 1992)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 757)

Abstract

In this paper, we address the problem of supporting SPMD execution of programs that use recursively defined dynamic data structures on distributed memory machines. The techniques developed for supporting SPMD execution of array-based programs rely on the fact that arrays are statically defined and directly addressable; as a result, they do not apply to recursive data structures, which are neither statically defined nor directly addressable. We propose a three-pronged approach. First, we describe a simple mechanism for migrating a thread of control based on the layout of heap-allocated data. Second, we explain how to introduce parallelism into the model using a technique based on futures and lazy task creation [21]. Third, we present the compiler analyses and parallelization techniques required to exploit the proposed mechanism.
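
To make the futures idea concrete, the sketch below shows a future-based traversal of a recursive data structure. This is a minimal shared-memory illustration in C++ using std::async (our choice of illustration language and names — Node, treeSum — not the paper's), in the spirit of Multilisp futures [9] and lazy task creation [21]; it deliberately omits the distributed-memory machinery that is the paper's actual contribution.

```cpp
// Illustrative sketch only (our code, not the paper's): future-based
// parallelism over a recursive data structure, in the spirit of Multilisp
// futures [9] and lazy task creation [21]. It is shared-memory C++ and
// omits the distributed-memory migration mechanism the paper proposes.
#include <future>
#include <memory>

struct Node {
    int value;
    std::unique_ptr<Node> left, right;
};

// Sum a binary tree. The recursive call on the left subtree is wrapped in a
// future, so it may run in parallel with the traversal of the right subtree.
// Allowing both launch policies leaves the runtime free to defer the work and
// run it in the calling thread, loosely analogous to lazy task creation.
long treeSum(const Node* n) {
    if (n == nullptr) return 0;
    std::future<long> leftSum =
        std::async(std::launch::async | std::launch::deferred,
                   treeSum, n->left.get());
    long rightSum = treeSum(n->right.get());
    return n->value + rightSum + leftSum.get();
}

int main() {
    // Build a three-node tree: 1 at the root, 2 and 3 as children.
    auto root = std::make_unique<Node>();
    root->value = 1;
    root->left = std::make_unique<Node>();
    root->left->value = 2;
    root->right = std::make_unique<Node>();
    root->right->value = 3;
    return treeSum(root.get()) == 6 ? 0 : 1;  // exit code 0 on success
}
```

In the model proposed in the paper, by contrast, the thread of control migrates according to the layout of heap-allocated data rather than running against a single shared heap as above, and the compiler analyses and parallelization techniques the paper describes are what make it possible to exploit that mechanism.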

This work was supported, in part, by NSF Grant ASC-9110766 and by DARPA and ONR under contract number N00014-91-J-4039.

This research was supported, in part, by FCAR, NSERC, and the McGill Faculty of Graduate Studies and Research.

References

  1. F. Allen, M. Burke, P. Charles, R. Cytron, and J. Ferrante. An overview of the PTRAN analysis system for multiprocessing. Journal of Parallel and Distributed Computing, 5:617–640, 1988.

  2. J.R. Allen and K. Kennedy. Automatic translation of FORTRAN programs to vector form. ACM Transactions on Programming Languages and Systems, 9(4):491–542, October 1987.

  3. A. W. Appel and K. Li. Virtual memory primitives for user programs. In Proceedings of the Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, pages 96–107, April 1991.

  4. D. Callahan and K. Kennedy. Compiling programs for distributed memory multiprocessors. The Journal of Supercomputing, 2(2), October 1988.

  5. A. Deutsch. A storeless model of aliasing and its abstractions using finite representations of right-regular equivalence relations. In Proceedings of the 1992 International Conference on Computer Languages, pages 2–13, April 1992.

  6. M. Gerndt. Automatic Parallelization for Distributed-Memory Multiprocessing Systems. PhD thesis, University of Bonn, 1990.

  7. R. Gupta. SPMD execution of programs with dynamic data structures on distributed memory machines. In Proceedings of the 1992 International Conference on Computer Languages, pages 232–241, April 1992.

  8. R. Gupta and M. Epstein. High speed synchronization of processors using fuzzy barriers. International Journal of Parallel Programming, 19(1), 1990.

  9. R. H. Halstead, Jr. Multilisp: A language for concurrent symbolic computation. ACM Transactions on Programming Languages and Systems, 7(4):501–538, October 1985.

  10. W. Ludwell Harrison III. The interprocedural analysis and automatic parallelization of Scheme programs. Lisp and Symbolic Computation, 2(3/4):179–396, 1989.

  11. L. J. Hendren. Parallelizing Programs with Recursive Data Structures. PhD thesis, Cornell University, January 1990.

  12. L. J. Hendren, C. Donawa, M. Emami, G. R. Gao, Justiani, and B. Sridharan. Designing the McCAT compiler based on a family of structured representations. ACAPS Technical Memo 46, McGill University, 1992.

  13. L. J. Hendren, J. Hummel, and A. Nicolau. Abstractions for recursive pointer data structures: Improving the analysis and transformation of imperative programs. In Proceedings of the SIGPLAN '92 Conference on Programming Language Design and Implementation, June 1992.

  14. L. J. Hendren and A. Nicolau. Parallelizing programs with recursive data structures. IEEE Transactions on Parallel and Distributed Systems, 1(1), 1990.

  15. L. J. Hendren, B. Sridharan, V. Sreedhar, and Y. Wong. The SIMPLE AST — McCAT compiler. ACAPS Technical Memo 36, McGill University, 1992.

  16. S. Hiranandani, K. Kennedy, and C. Tseng. Compiler optimizations for FORTRAN D on MIMD distributed memory machines. In Proceedings of Supercomputing '91, pages 86–100, November 1991.

  17. S. Horwitz, T. Reps, and D. Binkley. Interprocedural slicing using dependence graphs. ACM Transactions on Programming Languages and Systems, 12(1):26–60, January 1990.

  18. C. Koelbel. Compiling Programs for Nonshared Memory Machines. PhD thesis, Purdue University, West Lafayette, IN, August 1990.

  19. C. Koelbel, P. Mehrotra, and J. van Rosendale. Supporting shared data structures on distributed memory architectures. In Proceedings of the Second ACM SIGPLAN Symposium on the Principles and Practice of Parallel Programming, 1990.

  20. J. R. Larus. Compiling Lisp programs for parallel execution. Lisp and Symbolic Computation, 4:29–99, 1991.

  21. E. Mohr, D. A. Kranz, and R. H. Halstead, Jr. Lazy task creation: A technique for increasing the granularity of parallel programs. IEEE Transactions on Parallel and Distributed Systems, 2(3):264–280, July 1991.

  22. D. Padua and M. Wolfe. Advanced compiler optimizations for supercomputers. Communications of the ACM, 29(12), December 1986.

  23. E. S. Roberts and M. T. Vandevoorde. WorkCrews: An abstraction for controlling parallelism. Technical Report 42, DEC Systems Research Center, April 1989.

  24. A. Rogers. Compiling for Locality of Reference. PhD thesis, Cornell University, August 1990.

  25. A. Rogers and K. Pingali. Process decomposition through locality of reference. In Proceedings of the SIGPLAN '89 Conference on Programming Language Design and Implementation, June 1989.

  26. M. Rosing, R. Schnabel, and R. Weaver. The DINO parallel programming language. Technical Report CU-CS-457-90, University of Colorado at Boulder, April 1990.

  27. W. Weihl, E. Brewer, A. Colbrook, C. Dellarocas, W. Hsieh, A. Joseph, C. Waldspurger, and P. Wang. Prelude: A system for portable parallel software. MIT/LCS 519, Massachusetts Institute of Technology, 1991.

  28. M. Wolfe. Optimizing Supercompilers for Supercomputers. Pitman Publishing, London, 1989.

  29. H. Zima, H. Bast, and M. Gerndt. SUPERB: A tool for semi-automatic MIMD/SIMD parallelization. Parallel Computing, 6(1):1–18, 1988.

Editor information

Utpal Banerjee, David Gelernter, Alex Nicolau, David Padua

Copyright information

© 1993 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Rogers, A., Reppy, J., Hendren, L. (1993). Supporting SPMD execution for dynamic data structures. In: Banerjee, U., Gelernter, D., Nicolau, A., Padua, D. (eds) Languages and Compilers for Parallel Computing. LCPC 1992. Lecture Notes in Computer Science, vol 757. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-57502-2_48

  • DOI: https://doi.org/10.1007/3-540-57502-2_48

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-57502-3

  • Online ISBN: 978-3-540-48201-7
