Incremental learning with a stopping criterion: experimental results

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 930)

Abstract

We recently proposed a new incremental procedure for supervised learning with noisy data. Each step adds to the current network a new unit (or a small 2- or 3-neuron network) trained to learn the error of the current network. The incremental step is repeated until the residual error of the current network can be considered noise. The stopping criterion is very simple and is deduced directly from a statistical test on the estimated parameters of the new unit. In this paper, we present an experimental comparison between several variants of the incremental algorithm and the classic backpropagation algorithm, with respect to convergence, speed of convergence, and the optimal number of neurons. The experimental results demonstrate the efficacy of this new incremental scheme, especially in avoiding spurious minima and in designing a network of well-suited size. The number of basic operations is also reduced, yielding an average gain in convergence speed of about 20%.
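
To make the procedure concrete, here is a minimal sketch of the incremental loop described in the abstract, written in Python with NumPy. All names (fit_unit, unit_is_significant, incremental_fit, predict) are illustrative, and since the abstract does not specify the exact statistical test, the stopping rule below uses a simple Wald-style z-test on the new unit's output weight as a stand-in; it is an assumption, not the paper's actual criterion.

```python
import numpy as np

def fit_unit(X, r, lr=0.05, epochs=500, seed=0):
    """Fit one tanh unit (input weights w, bias b, output weight v)
    to the current residual r by gradient descent on squared error."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.1, size=d)
    b, v = 0.0, 0.1
    for _ in range(epochs):
        h = np.tanh(X @ w + b)            # hidden activation, shape (n,)
        e = v * h - r                     # unit's error on the residual
        gv = (e @ h) / n                  # gradient w.r.t. output weight
        gh = e * v * (1.0 - h ** 2)       # backprop through tanh
        gw = (X.T @ gh) / n
        gb = gh.mean()
        v -= lr * gv
        w -= lr * gw
        b -= lr * gb
    return w, b, v

def unit_is_significant(X, r, w, b, v, z_crit=1.96):
    """Stand-in stopping test (an assumption, not the paper's exact test):
    treat v as a regression coefficient of r on the fitted activation h
    and check whether it differs significantly from zero."""
    h = np.tanh(X @ w + b)
    resid = r - v * h
    sigma2 = resid.var(ddof=1)                # noise variance estimate
    se_v = np.sqrt(sigma2 / (h @ h + 1e-12))  # std. error of v for fixed h
    return abs(v) / se_v > z_crit             # reject "v = 0" at ~5% level

def incremental_fit(X, y, max_units=20):
    """Grow the network one unit at a time: each unit learns the residual
    of the current network; stop when the new unit is not significant."""
    units = []
    r = y.astype(float).copy()
    for _ in range(max_units):
        w, b, v = fit_unit(X, r)
        if not unit_is_significant(X, r, w, b, v):
            break                         # residual looks like pure noise
        units.append((w, b, v))
        r = r - v * np.tanh(X @ w + b)    # residual for the next unit
    return units

def predict(units, X):
    """Sum the contributions of all accepted units."""
    return sum(v * np.tanh(X @ w + b) for w, b, v in units)
```

On a noisy one-dimensional regression task such as y = sin(2x) plus Gaussian noise, this loop typically keeps a handful of units and stops as soon as an added unit's contribution is statistically indistinguishable from zero, which is the behavior the abstract describes.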

Author information

R. Chentouf, C. Jutten

Editor information

José Mira, Francisco Sandoval

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Chentouf, R., Jutten, C. (1995). Incremental learning with a stopping criterion: experimental results. In: Mira, J., Sandoval, F. (eds) From Natural to Artificial Neural Computation. IWANN 1995. Lecture Notes in Computer Science, vol 930. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59497-3_218

  • DOI: https://doi.org/10.1007/3-540-59497-3_218

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-59497-0

  • Online ISBN: 978-3-540-49288-7

  • eBook Packages: Springer Book Archive
