
An Incremental Learning Method for Neural Networks Based on Sensitivity Analysis

  • Conference paper
Current Topics in Artificial Intelligence (CAEPIA 2009)

Abstract

The Sensitivity-Based Linear Learning Method (SBLLM) is a learning method for two-layer feedforward neural networks, based on sensitivity analysis, that calculates the weights by solving a linear system of equations. This yields a substantial saving in computational time, which significantly improves the performance of this method compared to other batch learning algorithms. The SBLLM works in batch mode; however, several reasons justify the need for an on-line version of this algorithm. Among them are the need for real-time learning in environments where the information is not available at the outset but is acquired continually, and situations in which large databases must be managed with limited computing resources. In this paper an incremental version of the SBLLM is presented. The theoretical basis for the method is given, and its performance is illustrated by comparing the results obtained by the on-line and batch versions of the algorithm.
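The following toy sketch illustrates the core ideas the abstract describes: obtaining weights by solving a linear system rather than by iterative descent, and updating that system incrementally as samples arrive so no pass over old data is needed. It is not the SBLLM itself (which trains a two-layer network via sensitivity analysis); all class and function names here are hypothetical, and the example is limited to a single linear layer fitted by the normal equations.

```python
import numpy as np

class IncrementalLinearLearner:
    """Illustrative only: least-squares weights from the normal
    equations A w = b, with A = X^T X and b = X^T y accumulated
    one sample at a time (hypothetical API, not the SBLLM)."""

    def __init__(self, n_inputs, reg=1e-6):
        # A small ridge term keeps A invertible before enough data arrives.
        self.A = reg * np.eye(n_inputs)
        self.b = np.zeros(n_inputs)

    def partial_fit(self, x, y):
        # Accumulate sufficient statistics; previous samples need not be revisited.
        self.A += np.outer(x, x)
        self.b += y * x

    @property
    def weights(self):
        # Weights come from solving a linear system, not from gradient descent.
        return np.linalg.solve(self.A, self.b)


rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0, 0.5])
model = IncrementalLinearLearner(n_inputs=3)
for _ in range(200):
    x = rng.standard_normal(3)
    model.partial_fit(x, w_true @ x)   # stream in one sample at a time

print(np.allclose(model.weights, w_true, atol=1e-3))
```

Because only the accumulated matrix `A` and vector `b` are stored, memory is independent of the number of samples seen, which is the property that makes this style of update attractive for the continually acquired data and limited resources mentioned above.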





Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Pérez-Sánchez, B., Fontenla-Romero, O., Guijarro-Berdiñas, B. (2010). An Incremental Learning Method for Neural Networks Based on Sensitivity Analysis. In: Meseguer, P., Mandow, L., Gasca, R.M. (eds) Current Topics in Artificial Intelligence. CAEPIA 2009. Lecture Notes in Computer Science, vol 5988. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-14264-2_5


  • DOI: https://doi.org/10.1007/978-3-642-14264-2_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-14263-5

  • Online ISBN: 978-3-642-14264-2

  • eBook Packages: Computer Science, Computer Science (R0)
