MPI parameter optimization during debugging phase of HPC system

The Journal of Supercomputing

Abstract

Before an HPC system is delivered to users, system debugging engineers need to tune the configuration of all system parameters, including the MPI runtime parameters. This process usually follows a trial-and-error approach, takes time, and requires expert insight into the subtle interactions between the software and the underlying hardware. As systems and applications grow in scale, this work becomes more and more challenging. This paper presents a method for selecting MPI runtime parameters that can find the optimal settings for most applications in a relatively short time. We evaluate our approach on the SPEC MPI2007 benchmark suite. Experimental results show that our approach achieves up to 11.93% improvement over the default settings.
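The abstract summarizes the contribution without showing the mechanics, so a concrete picture of the tuning problem may help. The sketch below illustrates the trial-and-error baseline the authors aim to improve on: it assumes Open MPI's mpirun --mca <name> <value> interface for setting runtime parameters, and the parameter names, value grid, process count, and benchmark binary are hypothetical stand-ins, not the paper's actual search space or method.

# Hypothetical sketch of an exhaustive trial-and-error sweep over MPI
# runtime parameters. Parameter names/values, NPROCS, and the benchmark
# binary are illustrative assumptions, not taken from the paper.
import itertools
import subprocess
import time

# A small grid over two Open MPI MCA parameters (candidate values assumed).
PARAM_GRID = {
    "btl_tcp_eager_limit": ["32768", "65536", "131072"],
    "coll_tuned_use_dynamic_rules": ["0", "1"],
}

NPROCS = 64                    # processes per run (assumed)
BENCHMARK = ["./benchmark.x"]  # stand-in for a SPEC MPI2007 binary

def run_once(params):
    """Launch the benchmark with the given MCA settings; return wall time."""
    cmd = ["mpirun", "-np", str(NPROCS)]
    for name, value in params.items():
        cmd += ["--mca", name, value]
    cmd += BENCHMARK
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

def sweep():
    """Try every parameter combination; keep the fastest configuration."""
    names = list(PARAM_GRID)
    best_params, best_time = None, float("inf")
    for values in itertools.product(*PARAM_GRID.values()):
        params = dict(zip(names, values))
        elapsed = run_once(params)
        if elapsed < best_time:
            best_params, best_time = params, elapsed
    return best_params, best_time

if __name__ == "__main__":
    params, elapsed = sweep()
    print(f"best setting: {params} ({elapsed:.2f} s)")

Even this toy grid costs six full benchmark runs per application, and the number of runs grows multiplicatively with every parameter added, which is why a guided selection method of the kind the paper proposes matters during a limited debugging window.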


References

  1. Saini S, Ciotti R, Gunney BTN et al (2007) Performance evaluation of supercomputers using HPCC and IMB benchmarks. J Comput Syst Sci 74(6):965–982

  2. Culler DE et al (1996) LogP: a practical model of parallel computation. Commun ACM 39(11):78–85

  3. Alexandrov A et al (1997) LogGP: incorporating long messages into the LogP model for parallel computation. J Parallel Distrib Comput

  4. Geimer M, Saviankou P et al (2012) Further improving the scalability of the Scalasca toolset. In: Proceedings of PARA 2010: State of the Art in Scientific and Parallel Computing, pp 463–473

  5. Gerndt M, Ott M (2009) Automatic performance analysis with Periscope. Concurr Comput Pract Exp 22(6):736–748

  6. Leiserson CE (1985) Fat-trees: universal networks for hardware-efficient supercomputing. IEEE Trans Comput 34(10):892–901

  7. Meuer H, Strohmaier E, Dongarra J (1993) TOP500. TOP500.org. Accessed on: 2021. [Online]. Available: http://www.top500.org

  8. Chaarawi M, Squyres JM, Gabriel E et al (2008) A tool for optimizing runtime parameters of Open MPI. In: Recent Advances in Parallel Virtual Machine and Message Passing Interface, 15th European PVM/MPI Users' Group Meeting, Dublin, Ireland, September 7–10, 2008. Springer

  9. Chameleon (1992) MPICH. Accessed on: 2021. [Online]. Available: http://www.mpich.org

  10. Strohmaier E, Simon H, Dongarra J, Meuer M (2020) TOP 10 Sites for November 2020. Lawrence Berkeley National Laboratory, University of Tennessee, ISC Group. Accessed on: 2021. [Online]. Available: http://www.top500.org/lists/top500/2020/11/

  11. Chunduri S, Parker S, Balaji P et al (2018) Characterization of MPI usage on a production supercomputer. In: SC18: International Conference for High Performance Computing, Networking, Storage and Analysis

  12. Rabenseifner R (1999) Automatic MPI counter profiling of all users: first results on a CRAY T3E 900–512. In: Proceedings of the Message Passing Interface Developer's and User's Conference, pp 77–85

  13. Müller MS, van Waveren M, Lieberman R, Whitney B, Saito H, Kumaran K, Baron J, Brantley WC, Parrott C, Elken T, Feng H, Ponder C (2007) SPEC MPI2007. Standard Performance Evaluation Corporation. Accessed on: 2021. [Online]. Available: http://www.spec.org/mpi2007/

  14. Müller MS, van Waveren M, Lieberman R et al (2010) SPEC MPI2007–an application benchmark suite for parallel systems using MPI. Concurr Comput Pract Exp 22(2):191–205

  15. Liao XK, Pang ZB, Wang KF et al (2015) High performance interconnect network for Tianhe system. J Comput Sci Technol

  16. Vetter J, Chambreau C (2006) mpiP. University of California. Accessed on: 2021. [Online]. Available: https://software.llnl.gov/mpiP

Acknowledgements

This work was supported in part by the National Key Research and Development Program of China (2018YFB0204301). I am particularly grateful to my wife, Huang Hui, whose support has been the driving force behind my research. I would also like to thank my past self, whose hard work makes it possible to do research without pressure today.

Author information

Corresponding author

Correspondence to Qi Du.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Du, Q., Huang, H. MPI parameter optimization during debugging phase of HPC system. J Supercomput 78, 1696–1711 (2022). https://doi.org/10.1007/s11227-021-03939-6
