
Radial Basis Gated Unit-Recurrent Neural Network (RBGU-RNN) Algorithm

  • Original Research
  • Published:
SN Computer Science

Abstract

The Radial Basis Gated Unit-Recurrent Neural Network (RBGU-RNN) algorithm is a new recurrent neural network architecture that combines a Radial Basis Gated Unit (RBGU) with the Long Short-Term Memory (LSTM) network architecture. This unit gives the RBGU-RNN two advantages over the existing LSTM network. First, since the RBGU is purely an activation unit and performs none of the weighted operations of a classical neuron unit, it does not propagate (duplicate) error the way the LSTM does. Second, because the unit sits at the beginning of the network's processing workflow, it standardizes the data before they reach the weighted units, which is not the case for a plain LSTM. This study provides a theoretical and experimental comparison of the LSTM and the RBGU-RNN. Using a real-world call data record, specifically a survey of end-user cellular network data traffic, we built a cellular traffic prediction model. We started with an ARIMA model, which allowed us to choose the number of time steps for the RBGU-RNN prediction model, that is, the number of past time steps used to predict the next value in the time series. The results show that the RBGU-RNN predicts cellular data traffic accurately and generalizes better than the LSTM. The R-squared statistics (coefficients of determination) show that the LSTM model explains \(58.31\%\) of user traffic consumption on the training set, while the RBGU-RNN model explains \(96.86\%\). Likewise, on the test set, the LSTM explains \(61.24\%\) of user traffic consumption and the RBGU-RNN \(95.20\%\). The RBGU-RNN also exhibits more efficient gradient descent than the standard LSTM, as shown by analysing the curves of the Mean Squared Error (MSE), the Mean Absolute Percentage Error (MAPE), and the Maximum Absolute Error (MAXAE) over the number of iterations.
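The exact RBGU equations appear only in the full text, not in this abstract. Purely to illustrate the idea described above, the following is a minimal sketch assuming the RBGU is a weight-free, element-wise Gaussian radial basis activation placed in front of an LSTM; the class names, kernel form, and parameter values are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class RBGU(nn.Module):
    """Hypothetical Radial Basis Gated Unit: a weight-free activation
    stage that squashes each input feature with a Gaussian radial basis
    kernel, standardizing the data before the weighted LSTM units.
    Kernel exp(-(x - c)^2 / (2 s^2)) is an assumption for illustration."""
    def __init__(self, center: float = 0.0, width: float = 1.0):
        super().__init__()
        self.center = center
        self.width = width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.exp(-((x - self.center) ** 2) / (2 * self.width ** 2))

class RBGULSTM(nn.Module):
    """RBGU at the start of the processing workflow, followed by an
    LSTM and a linear read-out for one-step-ahead traffic prediction."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.rbgu = RBGU()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, features)
        h, _ = self.lstm(self.rbgu(x))
        return self.head(h[:, -1])        # predict the next value

model = RBGULSTM(n_features=1)
window = torch.randn(8, 12, 1)            # 8 windows of 12 time steps
print(model(window).shape)                # torch.Size([8, 1])
```

The evaluation measures quoted in the abstract (R-squared, MSE, MAPE, MAXAE) follow their standard textbook definitions and can be computed directly; a short sketch on dummy data:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 4.0, 7.0])   # observed traffic (dummy values)
y_pred = np.array([2.8, 5.3, 3.9, 6.5])   # model predictions (dummy values)

mse = np.mean((y_true - y_pred) ** 2)                       # Mean Squared Error
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0  # Mean Absolute Percentage Error
maxae = np.max(np.abs(y_true - y_pred))                     # Maximum Absolute Error
r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)

print(f"MSE={mse:.4f}  MAPE={mape:.2f}%  MAXAE={maxae:.2f}  R^2={r2:.4f}")
```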



Data availability

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.


Author information

Corresponding author

Correspondence to Ndom Francis Rollin.

Ethics declarations

Conflicts of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Rollin, N.F., Giquel, S., Chantal, MA. et al. Radial Basis Gated Unit-Recurrent Neural Network (RBGU-RNN) Algorithm. SN COMPUT. SCI. 5, 68 (2024). https://doi.org/10.1007/s42979-023-02376-x

