Fast Sensitivity-Based Training of BP-Networks

  • Conference paper

In: Artificial Neural Networks and Machine Learning – ICANN 2014 (ICANN 2014)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 8681)

Abstract

Sensitivity analysis has become an acknowledged tool for studying the performance of artificial neural networks. It allows one to assess the influence of, e.g., each neuron or weight on the final network output; in particular, various feature selection and pruning strategies are based on this capability. In this paper, we present a new approximate sensitivity-based training algorithm that yields robust neural networks with generalization capabilities comparable to those of its exact analytical counterpart, yet much faster.
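The sensitivity of a network output with respect to an input or weight, as described in the abstract, can be illustrated with a minimal sketch. The tiny tanh network, its weights, and all function names below are illustrative assumptions, not the authors' algorithm; the finite-difference check stands in for the kind of approximate counterpart the paper contrasts with exact analytical sensitivities.

```python
import numpy as np

# Hypothetical two-layer tanh network (architecture chosen for illustration).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

def forward(x):
    h = np.tanh(x @ W1)        # hidden activations
    return h @ W2              # linear output, shape (1,)

def input_sensitivity(x):
    # Exact analytical sensitivity dy/dx for the network above:
    # y = tanh(x W1) W2  =>  dy/dx = W1 diag(1 - h^2) W2
    h = np.tanh(x @ W1)
    return (W1 @ np.diag(1.0 - h**2) @ W2).ravel()   # shape (3,)

x = rng.normal(size=3)
exact = input_sensitivity(x)

# Approximate counterpart: central finite differences on the output.
eps = 1e-6
approx = np.array([
    (forward(x + eps * e) - forward(x - eps * e)).item() / (2 * eps)
    for e in np.eye(3)
])
```

Large-magnitude entries of such a sensitivity vector mark inputs (or, analogously, weights) whose perturbation most affects the output, which is the basis of the feature-selection and pruning strategies mentioned above.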


Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Mrázová, I., Petříčková, Z. (2014). Fast Sensitivity-Based Training of BP-Networks. In: Wermter, S., et al. Artificial Neural Networks and Machine Learning – ICANN 2014. ICANN 2014. Lecture Notes in Computer Science, vol 8681. Springer, Cham. https://doi.org/10.1007/978-3-319-11179-7_64

  • DOI: https://doi.org/10.1007/978-3-319-11179-7_64

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-11178-0

  • Online ISBN: 978-3-319-11179-7

  • eBook Packages: Computer Science, Computer Science (R0)
