A Greedy Iterative Layered Framework for Training Feed Forward Neural Networks

Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 12104)

Abstract

In recent years, neuroevolution has become a dynamic and rapidly growing research field. Interest in this discipline is motivated by the need to create ad-hoc networks whose topology and parameters are optimized for the particular problem at hand. Although neuroevolution-based techniques can contribute fundamentally to improving the performance of artificial neural networks (ANNs), they have a notable drawback: the massive amount of computational resources they require. This paper proposes a novel population-based framework aimed at finding the optimal set of synaptic weights for ANNs. The proposed method partitions the weights of a given network by layer and, using an optimization heuristic, trains one layer at each step while “freezing” the remaining weights. In the experimental study, particle swarm optimization (PSO) was used as the underlying optimizer within the framework, and its performance was compared against both standard training of the network with PSO (i.e., training that considers the whole set of weights at once) and the backward propagation of errors (backpropagation). Results show that training the weight sub-spaces sequentially reduces training time, achieves better generalization, and exhibits smaller variance across different network architectures.
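To make the layered scheme concrete, below is a minimal sketch of the idea: a small feed-forward network is trained one layer at a time, with a bare-bones global-best PSO searching only the active layer's weights while every other layer stays frozen. This is an illustration under stated assumptions, not the authors' implementation: the helper names (`forward`, `pso`, `train_layerwise`), the PSO hyper-parameters (inertia w = 0.7, acceleration coefficients c1 = c2 = 1.5), and the toy XOR task are all hypothetical choices.

```python
# Sketch of the greedy iterative layered framework described in the abstract:
# optimize one layer's weight matrix with PSO while the rest stay frozen.
# Hyper-parameters and helper names are illustrative assumptions, not the
# authors' actual settings.
import numpy as np

rng = np.random.default_rng(0)

def forward(weights, X):
    # Forward pass: tanh hidden layers, linear output layer (biases omitted).
    a = X
    for W in weights[:-1]:
        a = np.tanh(a @ W)
    return a @ weights[-1]

def mse(weights, X, y):
    pred = forward(weights, X)
    return float(np.mean((pred - y) ** 2))

def pso(loss, dim, iters=100, n_particles=20, w=0.7, c1=1.5, c2=1.5):
    # Bare-bones global-best PSO over a flat parameter vector.
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([loss(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([loss(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

def train_layerwise(layer_sizes, X, y, sweeps=3):
    # Greedy layered training: run PSO on one layer's weights at a time,
    # "freezing" all other layers, then move on to the next layer.
    weights = [rng.uniform(-1.0, 1.0, (m, n))
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
    for _ in range(sweeps):
        for i in range(len(weights)):
            shape = weights[i].shape
            def layer_loss(flat, i=i, shape=shape):
                trial = list(weights)          # shallow copy; only layer i varies
                trial[i] = flat.reshape(shape)
                return mse(trial, X, y)
            weights[i] = pso(layer_loss, int(np.prod(shape))).reshape(shape)
    return weights

# Toy usage: learn XOR with a 2-3-1 network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
trained = train_layerwise([2, 3, 1], X, y)
print("final MSE:", mse(trained, X, y))
```

Because each PSO run searches only one layer's weight matrix, the dimensionality of each optimization step is a fraction of the whole network's, which is the intuition behind the reported reduction in training time; repeating the sweep over the layers mirrors the iterative character of the framework.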



Acknowledgements

This work was partially supported by FCT, Portugal, through funding of the LASIGE Research Unit (UID/CEC/00408/2019) and the projects PREDICT (PTDC/CCI-CIF/29877/2017), BINDER (PTDC/CCI-INF/29168/2017), GADgET (DS-AIPA/DS/0022/2018), and AICE (DSAIPA/DS/0113/2019), and by financial support from the Slovenian Research Agency (research core funding No. P5-0410).

This work is the result of the collaboration between the University of Salerno and Nova IMS. The first two authors contributed equally to this work.

Author information

Correspondence to A. Della Cioppa or L. Vanneschi.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Custode, L.L., Tecce, C.L., Bakurov, I., Castelli, M., Cioppa, A.D., Vanneschi, L. (2020). A Greedy Iterative Layered Framework for Training Feed Forward Neural Networks. In: Castillo, P.A., Jiménez Laredo, J.L., Fernández de Vega, F. (eds) Applications of Evolutionary Computation. EvoApplications 2020. Lecture Notes in Computer Science, vol. 12104. Springer, Cham. https://doi.org/10.1007/978-3-030-43722-0_33

  • DOI: https://doi.org/10.1007/978-3-030-43722-0_33

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-43721-3

  • Online ISBN: 978-3-030-43722-0

  • eBook Packages: Computer Science, Computer Science (R0)
