
How effective is the Grey Wolf optimizer in training multi-layer perceptrons


Abstract

This paper employs the recently proposed Grey Wolf Optimizer (GWO) to train Multi-Layer Perceptrons (MLPs) for the first time. Eight standard datasets, including five classification and three function-approximation datasets, are used to benchmark the performance of the proposed method. For verification, the results are compared with those of some of the most well-known evolutionary trainers: Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Ant Colony Optimization (ACO), Evolution Strategy (ES), and Population-based Incremental Learning (PBIL). The statistical results show that the GWO algorithm provides very competitive results in terms of improved local optima avoidance, and that the proposed trainer achieves a high level of accuracy in both classification and function approximation.
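To make the idea of an evolutionary MLP trainer concrete, the sketch below flattens the weights and biases of a single-hidden-layer MLP into one position vector per wolf and uses the standard GWO update equations to minimize the mean squared error over a training set. This is a minimal illustration only, not the paper's implementation: the network size, the XOR toy dataset, and all hyper-parameters are assumptions made for the example and do not reproduce the paper's experimental setup.

```python
# Minimal sketch of GWO-based MLP training (illustrative assumptions only).
import numpy as np

def mlp_forward(weights, X, n_in, n_hidden):
    """Single-hidden-layer MLP with sigmoid activations.
    `weights` is a flat vector: [W1, b1, W2, b2]."""
    i = 0
    W1 = weights[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = weights[i:i + n_hidden]; i += n_hidden
    W2 = weights[i:i + n_hidden]; i += n_hidden
    b2 = weights[i]
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))      # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # scalar output per sample

def mse(weights, X, y, n_in, n_hidden):
    """Mean squared error: the fitness each wolf tries to minimize."""
    return np.mean((mlp_forward(weights, X, n_in, n_hidden) - y) ** 2)

def gwo_train(X, y, n_hidden=4, n_wolves=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    dim = n_in * n_hidden + n_hidden + n_hidden + 1   # total weights + biases
    wolves = rng.uniform(-1.0, 1.0, (n_wolves, dim))  # candidate weight vectors

    for t in range(n_iter):
        fitness = np.array([mse(w, X, y, n_in, n_hidden) for w in wolves])
        order = np.argsort(fitness)
        # Alpha, beta, delta: the three best solutions found so far.
        alpha, beta, delta = (wolves[j].copy() for j in order[:3])
        a = 2.0 - 2.0 * t / n_iter                    # a decreases linearly 2 -> 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a
                C = 2.0 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D             # move toward each leader
            wolves[i] = new_pos / 3.0                 # average of the three moves

    fitness = np.array([mse(w, X, y, n_in, n_hidden) for w in wolves])
    return wolves[np.argmin(fitness)]

# Toy usage on the XOR problem (an assumed dataset, not one from the paper).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
best = gwo_train(X, y)
print("train MSE:", mse(best, X, y, X.shape[1], 4))
```

Because GWO only needs fitness values, not gradients, the same loop could in principle wrap any network architecture or error measure; the single hidden layer and MSE above are simply the most compact choices for a sketch.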



Author information

Correspondence to Seyedali Mirjalili.


Cite this article

Mirjalili, S. How effective is the Grey Wolf optimizer in training multi-layer perceptrons. Appl Intell 43, 150–161 (2015). https://doi.org/10.1007/s10489-014-0645-7

