Computing communication sets for control parallel programs

  • Conference paper
Languages and Compilers for Parallel Computing (LCPC 1994)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 892)

Abstract

Data flow analysis has been used by compilers in diverse contexts, from optimization to register allocation. Traditional analysis of sequential programs has centered on scalar variables. More recently, several researchers have investigated analysis of array sections for optimizations on modern architectures. This information has been used to distribute data, optimize data movement, and vectorize or parallelize programs. As multiprocessors become more commonplace, we believe there will be considerable interest in explicitly parallel programming languages. In this paper, we extend traditional array section analysis to parallel languages that include additional control and synchronization structures.
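The array section analysis the abstract describes summarizes which array elements a region of code reads or writes, and a communication set falls out of the overlap between one task's writes and another's reads. As a rough, hypothetical sketch of that idea (the `Section` class and the `meet`/`intersect` operations below are illustrative inventions, not the paper's formulation):

```python
from dataclasses import dataclass

# Hypothetical 1-D array section: the inclusive index range lo..hi
# touched by an array reference, in the spirit of the bounded
# regular-section analyses cited below (e.g. Havlak and Kennedy [22]).
@dataclass(frozen=True)
class Section:
    lo: int
    hi: int

    def is_empty(self) -> bool:
        return self.lo > self.hi

def meet(a: Section, b: Section) -> Section:
    """Conservative union: smallest single range covering both sections."""
    return Section(min(a.lo, b.lo), max(a.hi, b.hi))

def intersect(a: Section, b: Section) -> Section:
    """Exact intersection of two ranges (may come out empty)."""
    return Section(max(a.lo, b.lo), min(a.hi, b.hi))

# A communication set between two parallel tasks can be approximated as
# the overlap of the section one task writes with the section another reads.
writes_task0 = Section(0, 49)    # task 0 writes A[0..49]
reads_task1 = Section(40, 99)    # task 1 reads  A[40..99]
comm = intersect(writes_task0, reads_task1)
print(comm)  # -> Section(lo=40, hi=49): elements that must be communicated
```

Real section analyses track triplets (lower bound, upper bound, stride) per dimension and must stay conservative in the presence of the control and synchronization structures the paper considers; this sketch only conveys the flavor of the set operations involved.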


References

  1. Frances Allen, Michael Burke, Philippe Charles, Ron Cytron, and Jeanne Ferrante. An overview of the PTRAN analysis system for multiprocessing. In Proc. 1987 International Conf. on Supercomputing, pages 194–211, Athens, Greece, June 1987.

  2. Frances Allen, Michael Burke, Philippe Charles, Ron Cytron, and Jeanne Ferrante. An overview of the PTRAN analysis system for multiprocessing. J. Parallel and Distributed Computing, 5(5):617–640, October 1988.

  3. Frances Allen, Michael Burke, Philippe Charles, Ron Cytron, and Jeanne Ferrante. An overview of the PTRAN analysis system for multiprocessing. J. Parallel and Distributed Computing, 5(5):617–640, October 1988. (Update of [1].)

  4. Robert Alverson, David Callahan, Daniel Cummings, Brian Koblenz, Allan Porterfield, and Burton Smith. The Tera Computer System. In Proc. 17th Annual Symposium on Computer Architecture, Computer Architecture News, pages 1–6, Amsterdam, Netherlands, June 1990. ACM.

  5. W. Blume. Success and limitations in automatic parallelization of the Perfect benchmark programs. Technical report, University of Illinois Center for Supercomputing Research and Development, July 1992. CSRD Rpt. No. 1249.

  6. Shahid H. Bokhari. Partitioning problems in parallel, pipelined, and distributed computing. IEEE Transactions on Computers, C-37:48–57, January 1988.

  7. K. M. Chandy and C. F. Kesselman. Compositional C++: Compositional parallel programming. In Proceedings of the Fourth Workshop on Parallel Computing and Compilers. Springer-Verlag, 1992.

  8. K. M. Chandy and C. F. Kesselman. Research Directions in Object Oriented Programming, chapter CC++: A Declarative Concurrent Object Oriented Programming Notation. MIT Press, 1993.

  9. Ron Cytron, Steve Karlovsky, and Kevin P. McAuliffe. Automatic management of programmable caches (extended abstract). Proceedings of the 1988 International Conference on Parallel Processing, August 1988. Also available as CSRD Rpt. No. 728 from the University of Illinois Center for Supercomputing Research and Development.

  10. D. E. Maydan, S. Amarasinghe, and M. S. Lam. Array data flow analysis and its use in array privatization. In Conf. Record 20th Annual ACM Symp. Principles of Programming Languages, Charleston, South Carolina, January 1993.

  11. Dirk Grunwald and Harini Srinivasan. Data Flow Equations for Explicitly Parallel Programs. In Conf. Record ACM Symp. Principles and Practice of Parallel Programming, San Diego, California, May 1993.

  12. E. Duesterwald, Rajiv Gupta, and Mary Lou Soffa. A practical data flow framework for array reference analysis and its use in optimizations. Proceedings of the SIGPLAN '93 Conference on Programming Language Design and Implementation, pages 68–77, June 1993.

  13. R. Eigenmann and W. Blume. An effectiveness study of parallelizing compiler techniques. Proceedings of the International Conference on Parallel Processing, August 1991.

  14. Jeanne Ferrante, Dirk Grunwald, and Harini Srinivasan. Array Section Analysis for Control Parallel Programs. Technical Report CU-CS-684-93, University of Colorado at Boulder, November 1993.

  15. A. Gerasoulis, Venugopal, and T. Yang. Clustering task graphs for message passing architectures. Proceedings of the 4th ACM Inter. Conf. on Supercomputing (ICS 90), pages 447–456, June 1990.

  16. J. E. Gilbert. An Investigation of the Partitioning of Algorithms Across an MIMD Computing System. Technical Report 176, Stanford University, Computer Systems Laboratory, May 1980.

  17. Milind Girkar and Constantine Polychronopoulos. Partitioning programs for parallel execution. Technical report, University of Illinois Center for Supercomputing Research and Development, July 1988. CSRD Rpt. No. 765. Also in Proc. of ACM 1988 Int'l. Conf. on Supercomputing, St. Malo, France, July 4–8, 1988, pp. 216–229.

  18. E. Granston and A. Veidenbaum. Detecting Redundant Accesses to Array Data. In Proc. of Supercomputing 1991, pages 854–865, Albuquerque, NM, November 1991.

  19. Thomas Gross and Peter Steenkiste. Structured Dataflow Analysis for Arrays and its Use in an Optimizing Compiler. Software — Practice and Experience, 20(2):133–155, February 1990.

  20. Manish Gupta and Edith Schonberg. A Framework for Exploiting Data Availability to Optimize Communications. In Proc. of the Sixth Workshop on Languages and Compilers for Parallel Computing [41].

  21. P. Brinch Hansen. The Programming Language Concurrent Pascal. IEEE Transactions on Software Engineering, 1(2):199–206, June 1975.

  22. P. Havlak and K. Kennedy. An Implementation of Interprocedural Bounded Regular Section Analysis. IEEE Trans. on Parallel and Distributed Systems, 2(3):350–360, July 1991.

  23. Matthew S. Hecht. Flow Analysis of Computer Programs. North Holland, New York, 1977.

  24. S. Flynn Hummel and R. Kelly. A rationale for massively parallel programming with sets. Journal of Programming Languages, 1993. Published by Chapman and Hall. To appear.

  25. IBM. Parallel Fortran Language and Library Reference, March 1988. Pub. No. SC23-0431-0.

  26. Harry F. Jordan, Muhammad S. Benten, Gita Alaghband, and Ruediger Jakob. The Force: A highly portable parallel programming language. In Emily C. Plachy and Peter M. Kogge, editors, Proc. 1989 International Conf. on Parallel Processing, volume II, pages II-112–II-117, St. Charles, IL, August 1989.

  27. B. W. Kernighan and S. Lin. An efficient heuristic procedure for partitioning graphs. The Bell System Technical Journal, pages 291–307, February 1970.

  28. C. Koelbel. Compiling Programs for Nonshared Memory Machines. Technical report, Purdue University, August 1990.

  29. C. McCreary and H. Gill. Automatic determination of grain size for efficient parallel processing. CACM, 32(9):1073–1078, September 1989.

  30. Parallel Computing Forum. PCF FORTRAN. FORTRAN Forum, 10(3), September 1991. Special issue.

  31. Jih-Kwon Peir. Program Partitioning and Synchronization on Multiprocessor Systems. PhD thesis, University of Illinois at Urbana-Champaign, March 1986. Report No. UIUCDCS-R-86-1259.

  32. Dick Pountain and David May. A Tutorial Introduction to OCCAM Programming. March 1987.

  33. Kendall Square Research. Technical summary. Technical report, Kendall Square Research, 1992.

  34. M. Rinard, D. Scales, and M. Lam. Heterogeneous Parallel Programming in Jade. In Proc. of IEEE Supercomputing 92, pages 245–258, November 1992.

  35. Vivek Sarkar. Partitioning and Scheduling Parallel Programs for Multiprocessors. Pitman, London, and The MIT Press, Cambridge, Massachusetts, 1989. In the series Research Monographs in Parallel and Distributed Computing. This monograph is a revised version of the author's Ph.D. dissertation, published as Technical Report CSL-TR-87-328, Stanford University, April 1987.

  36. Vivek Sarkar. Automatic Partitioning of a Program Dependence Graph into Parallel Tasks. IBM Journal of Research and Development, 35(5/6):779–804, September/November 1991.

  37. Vivek Sarkar and John Hennessy. Partitioning parallel programs for macro-dataflow. ACM Conference on Lisp and Functional Programming, pages 202–211, August 1986.

  38. L. Snyder. Type architecture, shared memory, and the corollary of modest potential. Annual Review of Computer Science, pages 289–318, 1986.

  39. Harini Srinivasan. Optimizing Explicitly Parallel Programs. PhD thesis, University of Colorado, August 1994. In preparation.

  40. Harini Srinivasan and Michael Wolfe. Analyzing programs with explicit parallelism. In Utpal Banerjee, David Gelernter, Alexandra Nicolau, and David A. Padua, editors, Languages and Compilers for Parallel Computing, pages 405–419. Springer-Verlag, 1992.

  41. Proc. of the Sixth Workshop on Languages and Compilers for Parallel Computing, Portland, OR, August 1993.

  42. T. Yang and A. Gerasoulis. Pyrros: Static task scheduling and code generation for message passing multiprocessors. In Proc. of 6th ACM Inter. Conf. on Supercomputing (ICS 92), pages 428–437, July 1992.

  43. T. Yang and A. Gerasoulis. A Fast Static Scheduling Algorithm for DAGs on an Unbounded Number of Processors. In Proc. of IEEE Supercomputing 91, pages 633–642, November 1991.

Editor information

Keshav Pingali, Utpal Banerjee, David Gelernter, Alex Nicolau, David Padua

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ferrante, J., Grunwald, D., Srinivasan, H. (1995). Computing communication sets for control parallel programs. In: Pingali, K., Banerjee, U., Gelernter, D., Nicolau, A., Padua, D. (eds) Languages and Compilers for Parallel Computing. LCPC 1994. Lecture Notes in Computer Science, vol 892. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0025887

  • DOI: https://doi.org/10.1007/BFb0025887

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-58868-9

  • Online ISBN: 978-3-540-49134-7
