
An introduction to HPF

Chapter in The Data Parallel Programming Model

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1132)

Abstract

This paper introduces the ideas that underlie the data-parallel language High Performance Fortran (HPF). It reviews HPF's key language elements: elemental array parallelism and data mapping pragmas; the relationship between data mapping and implicit communication; the FORALL and INDEPENDENT loop mechanisms for more general data parallelism; and the standard HPF library, which adds to the richness of the array operators at the disposal of the HPF programmer. It then takes up the important problem of data mapping at the procedure call interface, and it discusses interoperability with other programming models, including SPMD (single-program, multiple-data) programming.
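To make these elements concrete, the following minimal sketch (not taken from the chapter; all program and variable names are invented) uses standard HPF 1 constructs: it block-distributes an array over an abstract processor arrangement, aligns a second array with it so that the stencil update needs only nearest-neighbour communication, and expresses parallelism through whole-array assignment, FORALL, an INDEPENDENT loop, and the SUM_PREFIX routine from the standard HPF_LIBRARY module.

      PROGRAM sketch
!     A minimal, self-contained HPF 1 example; all names are invented.
      USE HPF_LIBRARY                  ! standard HPF library module
      INTEGER, PARAMETER :: n = 1000
      REAL a(n), b(n)
      INTEGER i
!     Data mapping: block-distribute a over four abstract processors
!     and align b with it, so the stencil below needs only
!     nearest-neighbour communication.
!HPF$ PROCESSORS p(4)
!HPF$ DISTRIBUTE a(BLOCK) ONTO p
!HPF$ ALIGN b(i) WITH a(i)
      b = 1.0
      a = b + 1.0                      ! elemental array parallelism
      FORALL (i = 2:n-1) a(i) = 0.5*(b(i-1) + b(i+1))
!     INDEPENDENT asserts that the iterations do not interfere.
!HPF$ INDEPENDENT
      DO i = 1, n
         a(i) = SQRT(b(i))
      END DO
!     SUM_PREFIX is one of the standard HPF library routines.
      b = SUM_PREFIX(a)
      END PROGRAM sketch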

The latter part of the paper is a review of the development of version 2.0 of HPF. The extended language, under development in 1996, includes a richer data mapping capability; an extension to the INDEPENDENT loop that allows reduction operations in the loop range; a means for directing the mapping of computation as well as data; and a way to specify concurrent execution of several parallel tasks on disjoint subsets of processors.
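The fragment below sketches how these proposed extensions look in directive form. It follows the shape of the HPF 2.0 approved extensions, but the exact syntax was still being settled when this chapter was written, and all names here are invented: a GEN_BLOCK distribution with per-processor block sizes, distribution onto processor subsets, an INDEPENDENT loop with a REDUCTION clause, an ON HOME directive that maps a computation to the owner of the referenced element, and a TASK_REGION that runs two tasks on disjoint processor subsets.

      PROGRAM sketch2
!     A sketch of the HPF 2.0 extensions; all names are invented.
      INTEGER, PARAMETER :: n = 1000
      INTEGER, PARAMETER :: sizes(4) = (/ 100, 300, 300, 300 /)
      REAL a(n), al(n/2), ar(n/2), s
      INTEGER i
!HPF$ PROCESSORS p(4)
!     Richer data mapping: per-processor block sizes (GEN_BLOCK) and
!     mapping onto subsets of the processor arrangement.
!HPF$ DISTRIBUTE a(GEN_BLOCK(sizes)) ONTO p
!HPF$ DISTRIBUTE al(BLOCK) ONTO p(1:2)
!HPF$ DISTRIBUTE ar(BLOCK) ONTO p(3:4)
      a  = 1.0
      al = 2.0
      ar = 3.0
!     INDEPENDENT loop carrying a reduction.
      s = 0.0
!HPF$ INDEPENDENT, REDUCTION(s)
      DO i = 1, n
         s = s + a(i)*a(i)
      END DO
!     Computation mapping: execute each assignment where a(i) resides.
!HPF$ INDEPENDENT
      DO i = 1, n
!HPF$    ON HOME (a(i))
         a(i) = a(i) + s
      END DO
!     Two parallel tasks on disjoint processor subsets.
!HPF$ TASK_REGION
!HPF$   ON (p(1:2)) BEGIN
          CALL half_step(al, n/2)
!HPF$   END ON
!HPF$   ON (p(3:4)) BEGIN
          CALL half_step(ar, n/2)
!HPF$   END ON
!HPF$ END TASK_REGION
      END PROGRAM sketch2

      SUBROUTINE half_step(x, m)
!     Stand-in task body: any computation local to its processor subset.
      INTEGER m, j
      REAL x(m)
      DO j = 1, m
         x(j) = 0.5*x(j)
      END DO
      END SUBROUTINE half_step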




Editor information

Guy-René Perrin, Alain Darte


Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Schreiber, R.S. (1996). An introduction to HPF. In: Perrin, G.-R., Darte, A. (eds.) The Data Parallel Programming Model. Lecture Notes in Computer Science, vol 1132. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61736-1_41


  • DOI: https://doi.org/10.1007/3-540-61736-1_41


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61736-5

  • Online ISBN: 978-3-540-70646-5

