
A hybrid training algorithm based on gradient descent and evolutionary computation


Abstract

Back propagation (BP) is widely used to search the parameters of fully-connected layers in many neural networks. Although BP can converge quickly to a solution, its gradient-based nature means it tends to fall into local optima. Metaheuristics such as evolutionary computation (EC) techniques, being gradient-free, can offer excellent global search capability thanks to their stochastic nature, but they tend to converge more slowly than BP. In this paper, a hybrid gradient descent search algorithm (HGDSA) is proposed for training the parameters of fully-connected neural networks. HGDSA searches the space extensively in the early stage by means of an ensemble of gradient descent strategies and then uses BP as an exploitative local search operator. Moreover, a self-adaptive method that selects strategies and updates their learning rates has been designed and embedded in the global search operators to prevent stagnation in local optima. To verify the effectiveness of HGDSA, experiments were performed on eleven classification datasets. The results demonstrate that HGDSA possesses both powerful global and local search abilities, and the proposed approach also appears promising on high-dimensional datasets.
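To make the two-phase scheme concrete, below is a minimal, illustrative Python sketch of the idea described above: a small population of weight vectors is updated in a global phase by an ensemble of gradient-descent strategies whose selection probabilities and learning rates adapt to observed improvement, after which the best individual is refined by plain gradient descent as a BP-style local search. The toy logistic-regression task, the particular strategy pool, and the adaptation rules are assumptions made for illustration only; they are not the authors' implementation of HGDSA.

```python
# Illustrative sketch of a hybrid global/local training loop in the spirit of
# HGDSA (NOT the authors' code): an ensemble of gradient-descent strategies
# with self-adaptive selection and learning rates, followed by BP-style
# refinement of the best individual.
import numpy as np

rng = np.random.default_rng(0)

# Toy task (assumption): logistic regression on synthetic data stands in for
# the fully-connected layers trained in the paper.
X = rng.normal(size=(200, 10))
true_w = rng.normal(size=10)
y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)

def loss(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def grad(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

# Ensemble of gradient-descent strategies (assumed pool for illustration).
def sgd(w, g, state, lr):
    return w - lr * g, state

def momentum(w, g, state, lr):
    v = 0.9 * state.get("v", 0.0) + g
    return w - lr * v, {"v": v}

def rmsprop(w, g, state, lr):
    s = 0.9 * state.get("s", 0.0) + 0.1 * g**2
    return w - lr * g / (np.sqrt(s) + 1e-8), {"s": s}

strategies = [sgd, momentum, rmsprop]

pop_size, dim = 8, X.shape[1]
pop = [rng.normal(scale=0.5, size=dim) for _ in range(pop_size)]
states = [{} for _ in range(pop_size)]
lrs = np.full(len(strategies), 0.1)                     # one learning rate per strategy
probs = np.full(len(strategies), 1.0 / len(strategies)) # strategy selection probabilities

# Global phase: each individual applies a self-adaptively selected strategy.
for it in range(200):
    for i in range(pop_size):
        k = rng.choice(len(strategies), p=probs)
        before = loss(pop[i])
        cand, states[i] = strategies[k](pop[i], grad(pop[i]), states[i], lrs[k])
        improved = loss(cand) < before
        if improved:
            pop[i] = cand
        # Self-adaptation (illustrative rule, not the paper's exact update):
        # reward strategies and learning rates that produced an improvement.
        probs[k] *= 1.05 if improved else 0.95
        probs = np.clip(probs, 0.05, None)
        probs /= probs.sum()
        lrs[k] = float(np.clip(lrs[k] * (1.02 if improved else 0.7), 1e-4, 1.0))

# Local phase: refine the best individual with plain gradient descent (BP-like).
best = min(pop, key=loss)
for it in range(500):
    best = best - 0.1 * grad(best)

print("final training loss:", round(loss(best), 4))
```

In the paper's setting the parameter vector would correspond to the weights of the fully-connected layers rather than a logistic-regression model, but the alternation between an adaptive ensemble-driven global phase and a BP-driven local phase is the point this sketch is meant to convey.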


Availability of data and materials

The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.


Acknowledgements

This work was supported by the National Natural Science Foundation of China (61876089, 61876185, 61902281, 61403206), the Natural Science Foundation of Jiangsu Province (BK20141005), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (14KJB520025), and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX22_1206).

Author information


Contributions

Yu Xue: Methodology, Supervision. Yiling Tong: Software, Writing - original draft, Writing - review & editing. Ferrante Neri: Writing - review & editing, Formal analysis.

Corresponding author

Correspondence to Yu Xue.

Ethics declarations

Competing interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Xue, Y., Tong, Y. & Neri, F. A hybrid training algorithm based on gradient descent and evolutionary computation. Appl Intell 53, 21465–21482 (2023). https://doi.org/10.1007/s10489-023-04595-4



