Abstract
The advent of the big data era has led to a substantial increase in the data available for analysis and prediction, creating a need to use this vast input effectively to improve prediction quality. LSTM-based neural networks have demonstrated exceptional performance in tasks such as time series forecasting. However, the effectiveness of these models can be constrained by the limits of GPU memory. Distributed computing has emerged as a promising solution to the challenges posed by large-sample, long-sequence time series forecasting. This work develops a novel distributed training method for LSTM-based time series forecasting in big data scenarios. Infinity norm gradient flow (INGF) is applied to speed up convergence, and acceleration techniques are designed to improve the utilization of multiple GPUs. The study provides significant insights into the performance of various distributed strategies and optimization techniques for batch-level distributed training. As a result, we achieve a tenfold increase in training efficiency with only a negligible sacrifice in accuracy.
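The abstract does not spell out the training loop, but a minimal sketch of batch-level data parallelism in TensorFlow (the framework the paper builds on) might look like the following. The lookback window, batch size, model width, and the per-gradient rescaling by the infinity norm are illustrative assumptions, not the paper's exact INGF update or acceleration scheme.

```python
import tensorflow as tf

# Batch-level data parallelism: every global batch is split across all
# visible GPUs, and gradients are all-reduced before each update.
strategy = tf.distribute.MirroredStrategy()

GLOBAL_BATCH = 256   # assumed global batch size
LOOKBACK = 48        # assumed input window length

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(LOOKBACK, 1)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1),  # one-step-ahead forecast
    ])
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
    # Sum per-example losses, then divide by the *global* batch size so the
    # all-reduced gradient matches single-device training.
    loss_fn = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.SUM)

@tf.function  # graph compilation, a common multi-GPU acceleration technique
def train_step(dist_inputs):
    def step_fn(x, y):
        with tf.GradientTape() as tape:
            pred = model(x, training=True)
            loss = loss_fn(y, pred) / GLOBAL_BATCH
        grads = tape.gradient(loss, model.trainable_variables)
        # Assumed INGF-style step: rescale each gradient by its infinity
        # norm (its largest absolute entry) so step size is norm-controlled.
        grads = [g / (tf.reduce_max(tf.abs(g)) + 1e-12) for g in grads]
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss
    per_replica_loss = strategy.run(step_fn, args=dist_inputs)
    return strategy.reduce(
        tf.distribute.ReduceOp.SUM, per_replica_loss, axis=None)
```

Feeding this loop is then a matter of wrapping a windowed tf.data.Dataset with strategy.experimental_distribute_dataset and calling train_step on each distributed batch; the actual INGF update and the full set of acceleration techniques are defined in the paper itself.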