Interprocedural data flow based optimizations for compilation of irregular problems

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 1033))

Abstract

Data parallel languages like High Performance Fortran (HPF) are emerging as the architecture-independent mode of programming distributed memory parallel machines. In this paper, we present the interprocedural optimizations required for compiling applications with irregular data access patterns when they are coded in such data parallel languages. We have developed an Interprocedural Partial Redundancy Elimination (IPRE) algorithm for optimized placement of the runtime preprocessing and collective communication routines inserted for managing communication in such codes. We also present two new interprocedural optimizations: placement of scatter routines and the use of coalescing and incremental routines.
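To make the kind of placement the abstract describes concrete, the following C sketch illustrates the inspector/executor style of compiling irregular accesses and why interprocedural placement of runtime preprocessing pays off. It is a minimal hand-written illustration under stated assumptions, not the paper's algorithm or generated code; Schedule, build_schedule, irregular_gather, and sweep are hypothetical stand-ins for a communication schedule, a runtime preprocessing (inspector) routine, a collective communication routine, and a callee with irregular accesses.

```c
/* Minimal sketch (hypothetical names, not the actual PARTI/CHAOS API):
 * runtime preprocessing hoisted out of a callee and a time-step loop. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {           /* hypothetical communication schedule */
    int  n;
    int *indices;          /* indirection pattern the schedule was built from */
} Schedule;

/* Inspector: preprocess the indirection array to plan off-processor accesses. */
static Schedule *build_schedule(const int *ia, int n) {
    Schedule *s = malloc(sizeof *s);
    s->n = n;
    s->indices = malloc(n * sizeof *s->indices);
    for (int i = 0; i < n; i++) s->indices[i] = ia[i];
    return s;
}

/* Executor: gather x[ia[i]] according to a previously built schedule.
 * On a distributed-memory machine this is where communication happens. */
static void irregular_gather(double *buf, const double *x, const Schedule *s) {
    for (int i = 0; i < s->n; i++) buf[i] = x[s->indices[i]];
}

/* Callee with an irregular access pattern.  Unoptimized code would rebuild
 * the schedule here on every call; after an IPRE-style placement the caller
 * supplies a schedule built once. */
static void sweep(double *y, const double *x, int n, const Schedule *sched) {
    double *buf = malloc(n * sizeof *buf);
    irregular_gather(buf, x, sched);
    for (int i = 0; i < n; i++) y[i] += buf[i];
    free(buf);
}

int main(void) {
    enum { N = 8, STEPS = 4 };
    int    ia[N] = {3, 1, 7, 0, 5, 2, 6, 4};  /* indirection array, fixed across steps */
    double x[N]  = {1, 2, 3, 4, 5, 6, 7, 8}, y[N] = {0};

    /* Interprocedural placement: the preprocessing call is hoisted out of
     * sweep() and out of the time-step loop because ia is not modified there. */
    Schedule *sched = build_schedule(ia, N);

    for (int t = 0; t < STEPS; t++)   /* time-step loop spanning the procedure call */
        sweep(y, x, N, sched);

    printf("y[0] = %g\n", y[0]);      /* four gathers reuse one schedule */
    free(sched->indices);
    free(sched);
    return 0;
}
```

In this sketch the placement decision is made by hand; the point of the IPRE analysis described in the paper is for the compiler to prove, across procedure boundaries, that the preprocessing and communication calls are redundant on repeated executions and to place them accordingly.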

This work was supported by NSF under grant No. ASC 9213821, by ONR under contract Numbers N00014-93-1-0158 and N000149410907, by ARPA under the Scalable I/O Project (Caltech Subcontract 9503) and by NASA/ARPA contract No. NAG-1-1485. The authors assume all responsibility for the contents of the paper.

Editor information

Chua-Huang Huang, Ponnuswamy Sadayappan, Utpal Banerjee, David Gelernter, Alex Nicolau, David Padua

Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Agrawal, G., Saltz, J. (1996). Interprocedural data flow based optimizations for compilation of irregular problems. In: Huang, CH., Sadayappan, P., Banerjee, U., Gelernter, D., Nicolau, A., Padua, D. (eds) Languages and Compilers for Parallel Computing. LCPC 1995. Lecture Notes in Computer Science, vol 1033. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0014218

  • DOI: https://doi.org/10.1007/BFb0014218

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-60765-6

  • Online ISBN: 978-3-540-49446-1

  • eBook Packages: Springer Book Archive
