Abstract
Artificial Neural Networks (ANNs) are a crucial foundation for deep learning and many machine learning algorithms. Training an ANN is computationally intensive and inherently parallel, and can therefore be accelerated by a Graphics Processing Unit (GPU). However, the dependency across ANN layers created by the Back Propagation (BP) algorithm makes it challenging to design a highly efficient ANN training algorithm on a GPU. In this work, we investigate and demonstrate how Dynamic Parallelism (DP) can further speed up an ANN training task on a GPU. We implemented a generic ANN framework on the GPU that supports an arbitrary number of layers and an arbitrary number of nodes in each layer. In two sets of experiments, we trained the generic ANN on the GPU for handwritten digit recognition with DP enabled and with DP disabled. Training with DP enabled achieved up to a 12.7x performance gain over training with DP disabled. After being trained on the GPU, our neural network achieved an accuracy of 96% in handwritten digit recognition.
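To illustrate the idea, below is a minimal CUDA sketch (our illustration, not the authors' published code) of how Dynamic Parallelism can drive layer-wise forward propagation: a parent kernel running on the device launches one child kernel per layer, so the host does not have to intervene between layers. All names, layer sizes, and the sigmoid activation are illustrative assumptions.

// Sketch of layer-wise forward propagation with CUDA Dynamic Parallelism.
// Hypothetical example code, not the authors' implementation.
// Build with relocatable device code, e.g.: nvcc -arch=sm_70 -rdc=true dp_ann.cu
#include <math.h>

// Child kernel: one fully connected layer, y = sigmoid(W x + b).
__global__ void layerForward(const float* W, const float* b,
                             const float* x, float* y, int nIn, int nOut)
{
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (j >= nOut) return;
    float sum = b[j];
    for (int i = 0; i < nIn; ++i)
        sum += W[j * nIn + i] * x[i];          // dense matrix-vector product
    y[j] = 1.0f / (1.0f + expf(-sum));         // sigmoid activation
}

// Parent kernel: a single device thread launches the per-layer child
// kernels in order into the same device-side (default) stream. Because
// work in one stream executes in order, each child grid waits for the
// previous layer's grid, enforcing the layer-to-layer dependency on the
// GPU without any host intervention.
// W, b, act are device arrays of device pointers, one entry per layer;
// sizes[l] is the node count of layer l (numLayers + 1 entries).
__global__ void forwardPass(float** W, float** b, float** act,
                            const int* sizes, int numLayers)
{
    if (threadIdx.x != 0 || blockIdx.x != 0) return;
    for (int l = 0; l < numLayers; ++l) {
        int nIn = sizes[l], nOut = sizes[l + 1];
        int threads = 128;
        int blocks  = (nOut + threads - 1) / threads;
        layerForward<<<blocks, threads>>>(W[l], b[l], act[l], act[l + 1],
                                          nIn, nOut);
    }
    // The parent grid does not complete until all child grids finish, so
    // one host-side launch of forwardPass covers the whole network.
}

With DP disabled, each layer's kernel would instead be launched from the host, paying a host-device round trip per layer; with DP enabled, the entire layer sequence is scheduled on the device, which is the source of the speedup the abstract reports.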