An efficient hybrid multilayer perceptron neural network with grasshopper optimization

Methodologies and Application | Soft Computing

Abstract

This paper proposes a new hybrid stochastic training algorithm for multilayer perceptron (MLP) neural networks based on the recently proposed grasshopper optimization algorithm (GOA). GOA is an emerging technique with high potential for tackling optimization problems, owing to its flexible and adaptive search mechanisms. It can deliver satisfactory performance by escaping local optima and balancing exploration and exploitation. The proposed GOAMLP model is applied to five important datasets: breast cancer, Parkinson's disease, diabetes, coronary heart disease, and orthopedic patients. The results are validated qualitatively and quantitatively against eight recent, well-regarded algorithms. It is shown that the proposed stochastic training algorithm GOAMLP substantially improves the classification rate of MLPs.
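
To make the training scheme concrete, the following is a minimal sketch of the GOAMLP idea in Python: the MLP's weights and biases are flattened into a single real-valued vector, and a GOA-style swarm searches that space to minimize classification error. This is an illustration under stated assumptions, not the authors' implementation; the update rule is simplified (for instance, it omits the distance normalization used in the full GOA), and names such as goa_train_mlp and the toy data are hypothetical.

import numpy as np

def mlp_forward(X, params, n_in, n_hidden):
    # One-hidden-layer MLP with sigmoid units; params is one flat vector.
    i = 0
    W1 = params[i:i + n_in * n_hidden].reshape(n_in, n_hidden)
    i += n_in * n_hidden
    b1 = params[i:i + n_hidden]
    i += n_hidden
    w2 = params[i:i + n_hidden]
    i += n_hidden
    b2 = params[i]
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))      # hidden activations
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))   # probability of class 1

def fitness(params, X, y, n_in, n_hidden):
    # Mean squared classification error: the quantity the swarm minimizes.
    return np.mean((mlp_forward(X, params, n_in, n_hidden) - y) ** 2)

def s_func(r, f=0.5, l=1.5):
    # GOA social-interaction function s(r) = f*exp(-r/l) - exp(-r).
    return f * np.exp(-r / l) - np.exp(-r)

def goa_train_mlp(X, y, n_hidden=5, n_agents=30, n_iter=100,
                  lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    dim = n_in * n_hidden + n_hidden + n_hidden + 1   # W1, b1, w2, b2
    pos = rng.uniform(lb, ub, size=(n_agents, dim))
    fits = np.array([fitness(p, X, y, n_in, n_hidden) for p in pos])
    best, best_fit = pos[fits.argmin()].copy(), fits.min()
    c_max, c_min = 1.0, 1e-5
    for t in range(n_iter):
        # Coefficient c shrinks over time, shifting the swarm from
        # exploration toward exploitation around the best solution.
        c = c_max - t * (c_max - c_min) / n_iter
        new_pos = np.empty_like(pos)
        for i in range(n_agents):
            social = np.zeros(dim)
            for j in range(n_agents):
                if i == j:
                    continue
                d = np.linalg.norm(pos[j] - pos[i])
                if d > 1e-12:
                    social += c * (ub - lb) / 2.0 * s_func(d) * (pos[j] - pos[i]) / d
            new_pos[i] = np.clip(c * social + best, lb, ub)
        pos = new_pos
        fits = np.array([fitness(p, X, y, n_in, n_hidden) for p in pos])
        if fits.min() < best_fit:
            best_fit, best = fits.min(), pos[fits.argmin()].copy()
    return best, best_fit

# Example usage on a toy, linearly separable problem (hypothetical data):
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    params, err = goa_train_mlp(X, y)
    acc = np.mean((mlp_forward(X, params, 4, 5) > 0.5) == y)
    print(f"final MSE {err:.3f}, training accuracy {acc:.2f}")

Each candidate position encodes a complete set of network parameters, so the best agent found by the swarm directly yields a trained MLP; this is what lets a gradient-free optimizer such as GOA stand in for backpropagation in the training loop.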

Author information

Corresponding author

Correspondence to Ibrahim Aljarah.

Ethics declarations

Conflict of interest

All authors declare that there is no conflict of interest.

Ethical standard

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Communicated by V. Loia.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Heidari, A.A., Faris, H., Aljarah, I. et al. An efficient hybrid multilayer perceptron neural network with grasshopper optimization. Soft Comput 23, 7941–7958 (2019). https://doi.org/10.1007/s00500-018-3424-2
