Abstract
Many-core architectures have drawn much attention in the HPC community on the road to the Exascale era, and many research activities worldwide target GPUs or the Many Integrated Core (MIC) architecture from Intel. Many-core CPUs offer great potential for improving computing performance; however, they are ill-suited to the heavy communication and I/O that MPI operations generally require.
We have been focusing on the MIC architecture as the many-core component of a hybrid parallel computer that combines many-core and multi-core CPUs. We propose a delegation mechanism for scalable MPI communications: operations issued on many-core CPUs are delegated to, and carried out on, multi-core CPUs. This architecture also minimizes memory utilization on both the many-core and the multi-core CPUs by deploying multi-layered MPI communicator information. We evaluated the delegation mechanism on an emulated hybrid computing environment; this paper presents the design and its performance evaluation on that environment.
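The core idea of delegation can be sketched in a few lines: compute-side ranks do not perform communication themselves but enqueue operation descriptors that a delegate, running on the multi-core side, executes on their behalf. The following is a minimal conceptual sketch of that pattern in Python; the class and operation names (`Delegate`, `"send"`) are illustrative assumptions, not the paper's actual interfaces, and a thread stands in for the multi-core delegate process.

```python
import queue
import threading

# Conceptual sketch of delegation-based communication: a worker
# (standing in for a rank on a many-core CPU) enqueues operation
# descriptors; a delegate thread (standing in for the multi-core
# CPU side) dequeues and carries them out, then returns the result.
# All names here are hypothetical illustrations.

class Delegate:
    def __init__(self):
        self._requests = queue.Queue()
        self._thread = threading.Thread(target=self._serve, daemon=True)
        self._thread.start()

    def _serve(self):
        # The delegate loop: pop one request at a time and execute it.
        while True:
            op, args, reply = self._requests.get()
            if op == "shutdown":
                reply.put(None)
                break
            if op == "send":
                # Stand-in for a point-to-point operation (e.g. an
                # MPI_Send) performed on behalf of the worker.
                dest, payload = args
                reply.put(("delivered", dest, payload))

    def delegate(self, op, args=()):
        # Called by the worker: hand the operation off and block
        # until the delegate reports completion.
        reply = queue.Queue()
        self._requests.put((op, args, reply))
        return reply.get()

d = Delegate()
result = d.delegate("send", (1, b"halo data"))
d.delegate("shutdown")
print(result)
```

The worker only ever touches a lightweight request queue, so the communication state (and, in the paper's design, the bulk of the communicator information) can live on the delegate side rather than consuming memory on every many-core rank.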
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Yoshinaga, K., Tsujita, Y., Hori, A., Sato, M., Namiki, M., Ishikawa, Y. (2012). Delegation-Based MPI Communications for a Hybrid Parallel Computer with Many-Core Architecture. In: Träff, J.L., Benkner, S., Dongarra, J.J. (eds) Recent Advances in the Message Passing Interface. EuroMPI 2012. Lecture Notes in Computer Science, vol 7490. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33518-1_10
DOI: https://doi.org/10.1007/978-3-642-33518-1_10
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-33517-4
Online ISBN: 978-3-642-33518-1