Abstract
Shared memory programming and distributed memory programming are the most prominent ways of parallelizing applications that require long processing times and large amounts of storage in High Performance Computing (HPC) systems. Parallel applications can be represented as Parallel Task Graphs (PTGs) using Directed Acyclic Graphs (DAGs). Scheduling PTGs in HPC systems is an NP-complete combinatorial problem that requires large amounts of storage and long processing times; heuristic methods and sequential programming languages have been proposed to address it. The open access paper "Scheduling in Heterogeneous Distributed Computing Systems Based on Internal Structure of Parallel Tasks Graphs with Meta-Heuristics" presents the Array Method, which optimizes the use of Processing Elements (PEs) in an HPC system and improves response times in scheduling and resource mapping through the Univariate Marginal Distribution Algorithm (UMDA). The Array Method uses the internal characteristics of PTGs to schedule tasks; it was programmed sequentially in the C language and was analyzed and tested using algorithms that generate synthetic workloads as well as DAGs of real applications. Considering the great benefits of parallel software, this research work presents the Array Method implemented with parallel programming in OpenMP. The experimental results show that the parallel implementation accelerates response times with respect to the sequential one when evaluating three metrics: waiting time, makespan, and quality of assignments.
This research work is funded by Tecnológico Nacional de México TecNM. Special Thanks to Instituto Tecnológico El Llano Aguascalientes.
References
Velarde Martinez, A.: Scheduling in heterogeneous distributed computing systems based on internal structure of parallel tasks graphs with meta-heuristics. Appl. Sci. 10(18), 6611 (2020)
Mochurad, L., Boyko, N., Petryshyn, N., Potokij, M., Yatskiv, M.: Parallelization of the simplex method based on the OpenMP technology. In: Lytvyn, V., et al. (ed.) Proceedings of the 4th International Conference on Computational Linguistics and Intelligent Systems (COLINS 2020), vol. I, 23–24 April 2020, Lviv, Ukraine (2020). http://ceur-ws.org/Vol-2604/paper62.pdf
Dimova, S., et al.: OpenMP parallelization of multiple precision Taylor series method. arXiv:1908.09301v1, 25 August 2019. https://arxiv.org/pdf/1908.09301.pdf
Stpiczyński, P.: Algorithmic and language-based optimization of Marsa-LFIB4 pseudorandom number generator using OpenMP, OpenACC and CUDA. J. Parallel Distrib. Comput. 137, 238–245 (2020)
Jost, G., Jin, H., an Mey, D., Hatay, F.F.: Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster. https://ntrs.nasa.gov/api/citations/20030107321/downloads/20030107321.pdf
Rabenseifner, R., Hager, G., Jost, G.: Hybrid MPI/OpenMP parallel programming on clusters of multi-core SMP nodes. In: 2009 17th Euromicro International Conference on Parallel, Distributed and Network-based Processing, Weimar, Germany, pp. 427–436 (2009). https://doi.org/10.1109/PDP.2009.43
Jiao, Y.Y., Zhao, Q., Wang, L., Huang, G.H., Tan, F.: A hybrid MPI/OpenMP parallel computing model for spherical discontinuous deformation analysis. Comput. Geotech. 106, 217–227 (2019). https://doi.org/10.1016/j.compgeo.2018.11.004
Xhafa, F., Abraham, A.: Computational models and heuristic methods for grid scheduling problems. Future Gener. Comput. Syst. 26(4), 608–621 (2010). https://doi.org/10.1016/j.future.2009.11.005
Larrañaga, P., Lozano, J.A.: Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation. Springer (2002). https://doi.org/10.1007/978-1-4615-1539-5, Hardcover ISBN: 978-0-7923-7466-4
de Supinski, B.R., et al.: The ongoing evolution of OpenMP. Proc. IEEE 106(11), 2004–2019 (2018). https://doi.org/10.1109/JPROC.2018.2853600
Kasim, H., March, V., Zhang, R., See, S.: Survey on parallel programming model. In: Cao, J., Li, M., Wu, M.Y., Chen, J. (eds.) Network and Parallel Computing, NPC 2008, Lecture Notes in Computer Science, vol. 5245, pp. 266–275. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-88140-7_24
Chorley, M.J., Walker, D.W.: Performance analysis of a hybrid MPI/OpenMP application on multi-core clusters. J. Comput. Sci. 1(3), 168–174 (2010). https://doi.org/10.1016/j.jocs.2010.05.001
Baños, R., Ortega, J., Gil, C., de Toro, F., Montoya, M.G.: Analysis of OpenMP and MPI implementations of meta-heuristics for vehicle routing problems. Appl. Soft Comput. 43, 262–275 (2016). https://doi.org/10.1016/j.asoc.2016.02.035
Chapman, B., Jost, G., Van Der Pas, R.: Using OpenMP, Portable Shared Memory Parallel Programming. The MIT Press, Cambridge (2008)
Ma, H., Wang, L., Krishnamoorthy, K.: Detecting thread-safety violations in hybrid OpenMP/MPI programs. In: 2015 IEEE International Conference on Cluster Computing, Chicago, IL, USA, pp. 460–463 (2015). https://doi.org/10.1109/CLUSTER.2015.70
Kale, V., Iwainsky, C., Klemm, M., Müller Korndörfer, J.H., Ciorba, F.M.: Toward a standard interface for user-defined scheduling in OpenMP, August 2019. https://doi.org/10.1007/978-3-030-28596-8_13
Freeman, J.: Parallel algorithms for depth-first search. University of Pennsylvania, Department of Computer and Information Science, Technical Report No. MS-CIS-91-71, October 1991
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Velarde Martínez, A. (2021). Parallelization of the Array Method Using OpenMP. In: Batyrshin, I., Gelbukh, A., Sidorov, G. (eds) Advances in Soft Computing. MICAI 2021. Lecture Notes in Computer Science, vol. 13068. Springer, Cham. https://doi.org/10.1007/978-3-030-89820-5_24
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-89819-9
Online ISBN: 978-3-030-89820-5