
Delegation-Based MPI Communications for a Hybrid Parallel Computer with Many-Core Architecture

  • Conference paper
Recent Advances in the Message Passing Interface (EuroMPI 2012)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 7490)


Abstract

Many-core architectures are drawing much attention in the HPC community as it moves toward the Exascale era, and many ongoing research activities worldwide use GPUs or Intel's Many Integrated Core (MIC) architecture. Many-core CPUs can greatly improve computing performance; however, they are not well suited to the heavy communication and I/O that MPI operations generally require.

We have been focusing on the MIC architecture as the many-core component of a hybrid parallel computer used in conjunction with multi-core CPUs. We propose a delegation mechanism for scalable MPI communications in which operations issued on the many-core CPUs are delegated to, and carried out on, the multi-core CPUs. The architecture also minimizes memory utilization on both many-core and multi-core CPUs by deploying multi-layered MPI communicator information. We evaluated the delegation mechanism on an emulated hybrid computing environment; this paper presents our design and its performance evaluation on that environment.
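
The abstract only outlines the mechanism, so the following minimal C/MPI sketch is not the paper's implementation; it merely illustrates the general delegation idea under assumed roles. Here, "compute" ranks stand in for processes on the many-core CPUs and forward their send requests to a dedicated "delegate" rank standing in for the multi-core side, which performs the actual communication on their behalf. The rank layout, tags, and message format are all hypothetical.

    /* Illustrative sketch of delegation-based MPI communication (assumed design,
     * not the paper's actual API): compute ranks forward payloads to a delegate
     * rank, which re-issues the sends toward the real destination. */
    #include <mpi.h>
    #include <stdio.h>

    #define TAG_DELEGATE_REQ 100   /* compute rank -> delegate: "send this for me" */
    #define TAG_DELEGATED    101   /* delegate -> final destination */

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int delegate = size - 1;   /* last rank plays the multi-core delegate */
        const int dest     = 0;          /* rank 0 is the final receiver */
        char buf[64];

        if (rank == delegate) {
            /* Delegate: receive one forwarded request per compute rank and
             * re-issue it toward the real destination. */
            for (int i = 1; i < size - 1; i++) {
                MPI_Recv(buf, sizeof(buf), MPI_CHAR, MPI_ANY_SOURCE,
                         TAG_DELEGATE_REQ, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, sizeof(buf), MPI_CHAR, dest,
                         TAG_DELEGATED, MPI_COMM_WORLD);
            }
        } else if (rank == dest) {
            /* Final destination: receive the delegated messages. */
            for (int i = 1; i < size - 1; i++) {
                MPI_Recv(buf, sizeof(buf), MPI_CHAR, delegate,
                         TAG_DELEGATED, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("dest got: %s\n", buf);
            }
        } else {
            /* Compute rank (many-core side): instead of sending directly,
             * forward the payload to the delegate. */
            snprintf(buf, sizeof(buf), "hello from compute rank %d", rank);
            MPI_Send(buf, sizeof(buf), MPI_CHAR, delegate,
                     TAG_DELEGATE_REQ, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

Run with at least three ranks (e.g. mpirun -np 4): the middle ranks act as many-core compute processes, the last rank delegates their sends, and rank 0 receives the forwarded messages.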




Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yoshinaga, K., Tsujita, Y., Hori, A., Sato, M., Namiki, M., Ishikawa, Y. (2012). Delegation-Based MPI Communications for a Hybrid Parallel Computer with Many-Core Architecture. In: Träff, J.L., Benkner, S., Dongarra, J.J. (eds) Recent Advances in the Message Passing Interface. EuroMPI 2012. Lecture Notes in Computer Science, vol 7490. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33518-1_10


  • DOI: https://doi.org/10.1007/978-3-642-33518-1_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-33517-4

  • Online ISBN: 978-3-642-33518-1

  • eBook Packages: Computer Science, Computer Science (R0)
