Training feedforward neural networks using multi-verse optimizer for binary classification problems


Abstract

This paper employs the recently proposed nature-inspired Multi-Verse Optimizer (MVO) to train the Multi-Layer Perceptron (MLP) neural network. The new training approach is benchmarked and evaluated on nine bio-medical datasets selected from the UCI machine learning repository. The results are compared with five classical and recent metaheuristic algorithms: the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Differential Evolution (DE), the Firefly (FF) algorithm, and Cuckoo Search (CS). In addition, the results are compared with two well-regarded gradient-based training methods: standard Back-Propagation (BP) and the Levenberg-Marquardt (LM) algorithm. The comparative study demonstrates that MVO is highly competitive and outperforms the other training algorithms on the majority of datasets in terms of both local optima avoidance and convergence speed.
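
The core of the approach is to treat network training as a continuous optimization problem. The sketch below is a minimal illustration, not the authors' implementation: each MVO "universe" encodes every weight and bias of a one-hidden-layer MLP as a flat vector, and its "inflation rate" is the mean squared error on the training data. The WEP schedule (growing from 0.2 to 1) and the TDR exponent p = 6 follow the defaults of the original MVO paper; the toy dataset, layer sizes, search bounds, and the simplified uniform donor selection in the white/black-hole step are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data standing in for one of the UCI datasets.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

n_in, n_hid = X.shape[1], 5
dim = n_in * n_hid + n_hid + n_hid + 1         # all weights and biases, flattened

def unpack(v):
    # Split one flat candidate vector into the MLP's weight matrices and biases.
    i = 0
    W1 = v[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = v[i:i + n_hid]; i += n_hid
    W2 = v[i:i + n_hid]; i += n_hid
    return W1, b1, W2, v[i]

def fitness(v):
    # Mean squared error of the encoded MLP on the training set (minimized).
    W1, b1, W2, b2 = unpack(v)
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))    # sigmoid hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output neuron
    return np.mean((out - y) ** 2)

n_univ, T, lb, ub = 30, 200, -10.0, 10.0
U = rng.uniform(lb, ub, size=(n_univ, dim))     # universes = candidate weight vectors

for t in range(1, T + 1):
    rates = np.array([fitness(u) for u in U])   # "inflation rates" (here: MSE)
    order = np.argsort(rates)                   # best universe (lowest MSE) first
    U, rates = U[order], rates[order]
    best = U[0].copy()

    norm = rates / (rates.sum() + 1e-12)        # normalized inflation rates
    WEP = 0.2 + t * (1.0 - 0.2) / T             # wormhole existence prob. grows
    TDR = 1.0 - t ** (1 / 6) / T ** (1 / 6)     # travelling distance rate shrinks

    for i in range(1, n_univ):                  # keep the elite universe intact
        for j in range(dim):
            if rng.random() < norm[i]:          # white/black-hole exchange
                k = rng.integers(0, n_univ)     # simplified: uniform donor instead
                U[i, j] = U[k, j]               # of roulette-wheel selection
            if rng.random() < WEP:              # wormhole jump around best universe
                step = TDR * (rng.random() * (ub - lb) + lb)
                U[i, j] = best[j] + step if rng.random() < 0.5 else best[j] - step
        U[i] = np.clip(U[i], lb, ub)

print("best training MSE:", fitness(best))

On a real benchmark, fitness would evaluate the candidate network on the training split of one of the nine UCI datasets, and classification accuracy on a held-out test split would be reported alongside the training error.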




Notes

  1. UCI Machine Learning Repository: http://archive.ics.uci.edu/ml/


Author information


Correspondence to Hossam Faris.


About this article


Cite this article

Faris, H., Aljarah, I. & Mirjalili, S. Training feedforward neural networks using multi-verse optimizer for binary classification problems. Appl Intell 45, 322–332 (2016). https://doi.org/10.1007/s10489-016-0767-1

