Abstract
Hpcfolder is a user-friendly high-performance computing tool for analyzing the performance of algorithms parallelized with MPI. The parallel algorithm's performance can be viewed in a Python notebook that reads timing data from a file, performs calculations on the data, and plots graphs. The plots generated by the notebook help in understanding how various performance metrics change with the number of processes and the problem size. A comparative study was also performed between two simple algorithms: multiplication by repeated addition (which is not computationally intensive) and matrix multiplication (which is computationally intensive), providing meaningful insights into how the performance of the two algorithms differs as the problem size and the number of processors vary. The goal of this paper is to show how a simple tool can be developed from scratch to help users analyze the performance of parallel algorithms using MPI.
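The metrics described above (performance as a function of process count and problem size) are typically derived from measured wall-clock times via speedup and efficiency. A minimal Python sketch of that calculation is shown below; the function names and the sample timing values are illustrative assumptions, not part of Hpcfolder itself.

```python
# Hypothetical sketch: computing speedup and efficiency from measured
# wall-clock times, as a notebook analyzing MPI runs might do.

def speedup(serial_time, parallel_time):
    """Speedup S(p) = T(1) / T(p)."""
    return serial_time / parallel_time

def efficiency(serial_time, parallel_time, num_procs):
    """Efficiency E(p) = S(p) / p."""
    return speedup(serial_time, parallel_time) / num_procs

# Illustrative timings (seconds) for one problem size, measured with
# 1, 2, 4, and 8 MPI processes.
times = {1: 80.0, 2: 42.0, 4: 23.0, 8: 14.0}
for p in sorted(times):
    print(f"p={p}: speedup={speedup(times[1], times[p]):.2f}, "
          f"efficiency={efficiency(times[1], times[p], p):.2f}")
```

Plotting these values against the process count for several problem sizes yields exactly the kind of scaling curves the abstract refers to: a compute-bound kernel such as matrix multiplication sustains high efficiency as processes are added, while a lightweight kernel such as repeated addition quickly becomes communication-bound.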
Acknowledgements
The authors are grateful to their respective institutions for the permission to publish this paper.
Cite this article
Jani, K., Kumar, A. & Nahata, R. Hpcfolder: a simple tool used to parallelize algorithms using the message passing interface (MPI). J Supercomput 78, 258–278 (2022). https://doi.org/10.1007/s11227-021-03896-0