Abstract
Mapping virtual parallel processes to physical processors (or cores) in an optimized way is important for achieving scalable performance, because communication cost is non-uniform in modern parallel computers. Existing work uses profile-guided approaches to automatically find mapping schemes that minimize the cost of point-to-point communications. However, these approaches cannot handle collective communications and may produce sub-optimal mappings for applications that use them.
In this paper, we propose OPP (Optimized Process Placement), an approach that handles collective communications by transforming them into series of point-to-point communication operations, following the way the collectives are implemented in the communication library. Existing approaches can then be applied to find mapping schemes that are optimized for both point-to-point and collective communications.
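To make the transformation concrete, the sketch below expands a binomial-tree broadcast, an algorithm commonly used by MPICH for small messages, into its underlying point-to-point messages and accumulates them into a process-to-process communication matrix that a point-to-point mapping tool could then consume. This is an illustrative sketch, not the authors' implementation: the process count, message size, and matrix layout are assumptions, and a real tool would follow the exact collective algorithms of the target MPI library.

```c
/* Sketch (assumed, not the paper's code): expand a binomial-tree MPI_Bcast
 * over P processes rooted at rank 0 into point-to-point messages and
 * accumulate them into a P x P communication matrix. */
#include <stdio.h>

#define P 8                      /* number of processes (assumed)           */

int main(void)
{
    long matrix[P][P] = {{0}};   /* matrix[src][dst] = bytes sent           */
    long msg_bytes = 1024;       /* broadcast message size (assumed)        */

    /* Binomial tree rooted at rank 0: in each round, every rank r < k that
     * already holds the data sends it to rank r + k; k doubles each round. */
    for (int k = 1; k < P; k <<= 1)
        for (int r = 0; r < k; ++r)
            if (r + k < P)
                matrix[r][r + k] += msg_bytes;

    /* Print the resulting point-to-point traffic of this one collective. */
    for (int i = 0; i < P; ++i) {
        for (int j = 0; j < P; ++j)
            printf("%6ld ", matrix[i][j]);
        printf("\n");
    }
    return 0;
}
```

Summing such per-collective matrices with the profiled point-to-point traffic would yield a single communication matrix of the form that existing profile-guided mapping tools expect as input.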
We evaluated our approach with micro-benchmarks covering all MPI collective communications, the NAS Parallel Benchmark suite, and three other applications. Experimental results show that the optimized process placements generated by our approach achieve significant speedups.
This work is supported by the Chinese National 973 Basic Research Program under Grant No. 2007CB310900 and the National High-Tech Research and Development Plan of China (863 Plan) under Grant No. 2006AA01A105.
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Zhang, J., Zhai, J., Chen, W., Zheng, W. (2009). Process Mapping for MPI Collective Communications. In: Sips, H., Epema, D., Lin, HX. (eds) Euro-Par 2009 Parallel Processing. Euro-Par 2009. Lecture Notes in Computer Science, vol 5704. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-03869-3_11
Print ISBN: 978-3-642-03868-6
Online ISBN: 978-3-642-03869-3