Abstract
This paper employs the recently proposed nature-inspired Multi-Verse Optimizer (MVO) algorithm to train Multi-layer Perceptron (MLP) neural networks. The new training approach is benchmarked on nine bio-medical datasets selected from the UCI machine learning repository. The results are compared against five classical and recent evolutionary metaheuristic algorithms: the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Differential Evolution (DE), the Firefly (FF) algorithm and Cuckoo Search (CS). In addition, the results are compared with two well-regarded conventional gradient-based training methods: classical Back-Propagation (BP) and the Levenberg-Marquardt (LM) algorithm. The comparative study demonstrates that MVO is highly competitive and outperforms the other training algorithms on the majority of datasets, with improved local optima avoidance and faster convergence.
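The core idea described above—encoding all MLP weights and biases as a flat vector and letting a population-based metaheuristic minimize the network's training error—can be sketched as follows. This is a minimal illustration, not the paper's implementation: the search move below is a simplified best-solution attraction with decaying noise standing in for MVO's white-hole/wormhole operators, and all function names (`unpack`, `mse`, `train`) are hypothetical.

```python
import numpy as np

def unpack(theta, n_in, n_hidden):
    """Split a flat parameter vector into the weights/biases of a 1-hidden-layer MLP."""
    i = 0
    W1 = theta[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = theta[i:i + n_hidden]; i += n_hidden
    W2 = theta[i:i + n_hidden]; i += n_hidden
    b2 = theta[i]
    return W1, b1, W2, b2

def mse(theta, X, y, n_hidden):
    """Fitness function: mean squared error of a sigmoid MLP for binary classification."""
    W1, b1, W2, b2 = unpack(theta, X.shape[1], n_hidden)
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))    # hidden-layer activations
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # scalar output per sample
    return np.mean((out - y) ** 2)

def train(X, y, n_hidden=4, pop=30, iters=200, seed=0):
    """Generic population-based search over the MLP weight space
    (a random-perturbation stand-in for MVO's universe-update rules)."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * n_hidden + n_hidden + n_hidden + 1
    swarm = rng.uniform(-1, 1, (pop, dim))
    fit = np.array([mse(v, X, y, n_hidden) for v in swarm])
    for t in range(iters):
        best = swarm[fit.argmin()]
        # pull each candidate toward the current best, with exploration noise
        # that decays over iterations (coarse analogue of MVO's shrinking
        # travelling-distance rate)
        step = 1.0 - t / iters
        cand = (swarm
                + rng.uniform(0, 1, (pop, 1)) * (best - swarm)
                + step * rng.normal(0, 0.1, swarm.shape))
        cf = np.array([mse(v, X, y, n_hidden) for v in cand])
        better = cf < fit
        swarm[better], fit[better] = cand[better], cf[better]
    return swarm[fit.argmin()], fit.min()
```

On a toy binary problem such as XOR, `train` returns the best weight vector found and its training MSE; in the paper's setting the fitness would instead be evaluated on each UCI bio-medical dataset and the update rule would follow MVO's actual mechanics.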






Cite this article
Faris, H., Aljarah, I. & Mirjalili, S. Training feedforward neural networks using multi-verse optimizer for binary classification problems. Appl Intell 45, 322–332 (2016). https://doi.org/10.1007/s10489-016-0767-1