
A new scheme for incremental learning

Neural Processing Letters

Abstract

We present a new incremental procedure for supervised learning with noisy data. Each step adds to the current network a new unit trained to learn the error of the network. The incremental step is repeated until the residual error of the current network can be regarded as noise. The stopping criterion is very simple and follows directly from a statistical test on the estimated parameters of the new unit. Initial experimental results demonstrate the efficacy of this new incremental scheme. Current work addresses the theoretical analysis and practical refinement of the algorithm.
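The abstract only outlines the procedure, so the following is a minimal illustrative sketch rather than the authors' algorithm. It assumes single-input tanh units fitted to the current residual by nonlinear least squares, and it instantiates the "statistical test on the estimated parameters of the new unit" as a Wald-type t-test on the new unit's output weight (H0: the weight is zero, i.e. the remaining error is noise). The names `unit`, `incremental_fit`, and `predict` are hypothetical.

```python
import numpy as np
from scipy import optimize, stats

def unit(x, a, b, w):
    """One candidate hidden unit: w * tanh(a*x + b)."""
    return w * np.tanh(a * x + b)

def incremental_fit(x, y, alpha=0.05, max_units=20, seed=0):
    """Grow a network one unit at a time, each unit trained on the residual.

    Stops when the new unit's output weight is not statistically
    significant, i.e. the remaining error is indistinguishable from noise.
    """
    rng = np.random.default_rng(seed)
    units = []                       # fitted (a, b, w) triples
    residual = np.asarray(y, dtype=float).copy()
    for _ in range(max_units):
        # Fit a new unit to the current residual by nonlinear least squares.
        try:
            params, cov = optimize.curve_fit(
                unit, x, residual, p0=rng.normal(size=3), maxfev=5000)
        except RuntimeError:
            break                    # fit did not converge; stop growing
        # Wald-type t-test on the output weight w = params[2].
        # H0: w = 0 (the new unit captures nothing beyond noise).
        se_w = np.sqrt(cov[2, 2])
        if not np.isfinite(se_w):
            break
        p_value = 2 * stats.t.sf(abs(params[2]) / se_w, df=len(x) - 3)
        if p_value > alpha:
            break                    # cannot reject H0: stop adding units
        units.append(params)
        residual -= unit(x, *params)
    return units

def predict(units, x):
    """Network output: the sum of all accepted units."""
    return sum(unit(x, *p) for p in units)

# Example: noisy 1-D regression.
rng = np.random.default_rng(1)
x = np.linspace(-3.0, 3.0, 200)
y = np.sin(2.0 * x) + 0.1 * rng.normal(size=x.size)
net = incremental_fit(x, y)
print(f"{len(net)} units, residual std = {np.std(y - predict(net, x)):.3f}")
```

Testing only the output weight is one plausible reading of the stopping rule; the authors' actual test may involve all of the new unit's estimated parameters.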




Cite this article

Jutten, C., Chentouf, R. A new scheme for incremental learning. Neural Process Lett 2, 1–4 (1995). https://doi.org/10.1007/BF02312374
