Abstract
The artificial neural network (ANN) is one of the most successful tools in machine learning, and its success depends largely on its architecture and training procedure. The multi-layer perceptron (MLP) is a popular form of ANN, and backpropagation is a well-known gradient-based approach for training it. Gradient-based search, however, suffers from a slow convergence rate and can become trapped in local minima, which degrades performance. Training an MLP amounts to minimizing the total network error, so it can be cast as an optimization problem, and stochastic optimization algorithms have proven effective on such problems. Battle royale optimization (BRO) is a recently proposed population-based metaheuristic for single-objective optimization over continuous search spaces. In this work, BRO is employed to train MLPs. The proposed method is compared with backpropagation (the generalized delta learning rule) and six well-known optimization algorithms on ten benchmark classification datasets. Experiments confirm that, in terms of error rate, accuracy, and convergence, the proposed approach yields promising results and outperforms its competitors.
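To make the formulation concrete, the minimal sketch below (our illustration, not the authors' implementation) encodes all MLP weights and biases as a single real-valued vector and lets a population-based metaheuristic minimize the total network error on a toy dataset. BRO's specific damage-and-respawn update rules are replaced here by a generic move-losers-toward-the-best step; all names, parameters, and the toy data are assumptions made for illustration only.

```python
# Sketch: MLP training as continuous optimization with a generic
# population-based metaheuristic (stand-in for BRO; not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, w, n_in, n_hid, n_out):
    """Decode the flat weight vector and run a one-hidden-layer MLP."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = w[i:i + n_out]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid outputs

def fitness(w, X, Y, n_in, n_hid, n_out):
    """Total network error (MSE) -- the quantity the optimizer minimizes."""
    return np.mean((mlp_forward(X, w, n_in, n_hid, n_out) - Y) ** 2)

# Toy XOR-style data, only to make the sketch runnable end to end.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([[0], [1], [1], [0]], float)
n_in, n_hid, n_out = 2, 4, 1
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out

pop = rng.uniform(-1, 1, (30, dim))  # population of candidate weight vectors
for _ in range(500):
    errs = np.array([fitness(w, X, Y, n_in, n_hid, n_out) for w in pop])
    best = pop[errs.argmin()]
    for k in range(len(pop)):
        # Losers move toward the current best plus shrinking random noise
        # (a simplified stand-in for BRO's damage/respawn mechanics).
        if errs[k] > errs.min():
            pop[k] += (rng.uniform(0, 1, dim) * (best - pop[k])
                       + rng.normal(0, 0.05, dim))

print("final error:", fitness(best, X, Y, n_in, n_hid, n_out))
```

The design point the sketch illustrates is that the optimizer never needs gradients: it only queries the error function, which is why any population-based metaheuristic (BRO, PSO, GWO, and so on) can be dropped into the same loop.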






Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Agahian, S., Akan, T. Battle royale optimizer for training multi-layer perceptron. Evolving Systems 13, 563–575 (2022). https://doi.org/10.1007/s12530-021-09401-5