
High-Level Data Mapping for Clusters of SMPs

  • Conference paper
  • First Online:
High-Level Parallel Programming Models and Supportive Environments (HIPS 2001)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2026)

Abstract

Clusters of shared-memory multiprocessors (SMPs) have become the most promising parallel computing platforms for scientific computing. However, SMP clusters significantly increase the complexity of user application development when using the low-level application programming interfaces MPI and OpenMP, forcing users to deal with both distributed-memory and shared-memory parallelization details. In this paper we present extensions of High Performance Fortran for SMP clusters which enable the compiler to adopt a hybrid parallelization strategy, efficiently combining distributed-memory with shared-memory parallelism. By means of a small set of new language features, the hierarchical structure of SMP clusters may be specified. This information is utilized by the compiler to derive inter-node data mappings for controlling distributed-memory parallelization across the nodes of a cluster, and intra-node data mappings for extracting shared-memory parallelism within nodes. Additional mechanisms are proposed for specifying inter- and intra-node data mappings explicitly, for controlling specific shared-memory parallelization issues, and for integrating OpenMP routines in HPF applications. The proposed features are being realized within the ADAPTOR and VFC compilers. The parallelization strategy for clusters of SMPs adopted by these compilers is discussed, as well as a hybrid-parallel execution model based on a combination of MPI and OpenMP. Early experimental results indicate the effectiveness of the proposed features.
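
As a rough illustration of the two levels of parallelism involved, the sketch below combines a standard HPF data-distribution directive (mapping an array across cluster nodes, which the compiler translates to message passing with MPI) with an OpenMP directive (exploiting the processors within a node). It uses only standard HPF 2.0 and OpenMP Fortran syntax; the routine and processor-array names are hypothetical, and it does not show the extended inter-/intra-node mapping syntax proposed in the paper.

      ! Hypothetical sketch: inter-node parallelism via an HPF data mapping
      ! (compiled to MPI) and intra-node parallelism via an OpenMP loop
      ! directive. Names (JACOBI_SWEEP, NODES) are illustrative only.
      SUBROUTINE JACOBI_SWEEP(U, UNEW, N)
        INTEGER, INTENT(IN) :: N
        REAL, INTENT(IN)    :: U(N,N)
        REAL, INTENT(OUT)   :: UNEW(N,N)
        INTEGER :: I, J
!HPF$   PROCESSORS NODES(4)                       ! inter-node level: 4 cluster nodes
!HPF$   DISTRIBUTE (*,BLOCK) ONTO NODES :: U, UNEW
!$OMP   PARALLEL DO PRIVATE(I)                    ! intra-node level: threads within a node
        DO J = 2, N-1
          DO I = 2, N-1
            UNEW(I,J) = 0.25 * (U(I-1,J) + U(I+1,J) + U(I,J-1) + U(I,J+1))
          END DO
        END DO
!$OMP   END PARALLEL DO
      END SUBROUTINE JACOBI_SWEEP

In the model described in the paper, the intra-node (OpenMP) parallelism would instead be derived by the compiler from the specified hierarchical cluster structure, rather than inserted by hand as in this sketch.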


Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Benkner, S., Brandes, T. (2001). High-Level Data Mapping for Clusters of SMPs. In: Mueller, F. (eds) High-Level Parallel Programming Models and Supportive Environments. HIPS 2001. Lecture Notes in Computer Science, vol 2026. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45401-2_1

  • DOI: https://doi.org/10.1007/3-540-45401-2_1

  • Published:

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-41944-0

  • Online ISBN: 978-3-540-45401-4

