Abstract
The quest for petascale computing has seen cluster sizes, measured in number of processor cores, increase rapidly. The Message Passing Interface (MPI) has emerged as the de facto standard on these modern, large-scale clusters, resulting in an increased research focus on the scalability of MPI libraries. However, as clusters grow in size, the scalability and performance of job launch mechanisms also need to be revisited.
In this work, we study the information exchange involved in the job launch phase of MPI applications. With the emergence of multi-core processing nodes, we examine the benefits of caching information at the node level during the job launch phase. We propose four design alternatives for such node-level caches and evaluate their performance benefits. We further propose enhancements that make these caches memory efficient while retaining their performance benefits, by taking advantage of communication patterns during the job startup phase. One of our cache designs, Hierarchical Cache with Message Aggregation, Broadcast and LRU (HCMAB-LRU), reduces the time involved in typical communication stages to one tenth while capping the memory used to a fixed upper bound based on the number of processes. This enables scalable MPI job launching for next-generation clusters with hundreds of thousands of processor cores.
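To make the HCMAB-LRU idea concrete, the following is a minimal sketch (not the authors' implementation) of a node-level key-value cache with LRU eviction and a memory cap derived from the process count. The class name, the `entries_per_process` parameter, and the key format are illustrative assumptions; during MPI startup, such a cache would sit between the processes on a node and the central launcher, serving repeated lookups (e.g. of peer connection information) locally.

```python
from collections import OrderedDict

class NodeLevelCache:
    """Illustrative node-level startup cache with LRU eviction.

    Hypothetical sketch of the abstract's idea: cache key-value pairs
    exchanged during MPI job launch at the node level, capping memory
    at a fixed upper bound based on the number of processes.
    """

    def __init__(self, num_processes, entries_per_process=2):
        # Fixed upper bound on cache size, scaled by process count.
        # entries_per_process is an assumed tuning knob, not from the paper.
        self.capacity = num_processes * entries_per_process
        self._store = OrderedDict()  # insertion order tracks recency

    def put(self, key, value):
        """Insert a key-value pair, evicting the LRU entry if full."""
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

    def get(self, key):
        """Return the cached value, or None on a miss.

        On a miss, a real launcher agent would fall back to fetching
        the value from its parent in the launch hierarchy.
        """
        if key not in self._store:
            return None
        self._store.move_to_end(key)  # mark entry as recently used
        return self._store[key]
```

Usage on a node hosting two processes, with a deliberately tiny cap to show eviction: after caching address entries for two peers and touching the first, inserting a third entry evicts the least recently used one while the cache stays within its bound.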
This research is supported in part by U.S. Department of Energy grants #DE-FC02-06ER25749 and #DE-FC02-06ER25755; National Science Foundation grants #CNS-0403342, #CCF-0702675 and #CCF-0833169; grant from Wright Center for Innovation #WCI04-010-OSU-0; grants from Intel, Mellanox, Cisco, and Sun Microsystems; Equipment donations from Intel, Mellanox, AMD, Advanced Clustering, Appro, QLogic, and Sun Microsystems.
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Sridhar, J.K., Panda, D.K. (2009). Impact of Node Level Caching in MPI Job Launch Mechanisms. In: Ropo, M., Westerholm, J., Dongarra, J. (eds) Recent Advances in Parallel Virtual Machine and Message Passing Interface. EuroPVM/MPI 2009. Lecture Notes in Computer Science, vol 5759. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-03770-2_29
DOI: https://doi.org/10.1007/978-3-642-03770-2_29
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-03769-6
Online ISBN: 978-3-642-03770-2