
Dynamic learning — An approach to forgetting in ART2 neural networks

  • Conference paper
Artificial Intelligence: Methodology, Systems, and Applications (AIMSA 1998)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1480)


Abstract

In machine learning, “forgetting” little-used or redundant information can be seen as a sensible strategy for managing specific and limited computational resources. This paper describes new learning rules for the ART2 neural network model of category learning that facilitate forgetting without additional node features or subsystems and that preserve the main characteristics of the classic ART2 model. We consider this approach to be straightforward and arguably biologically plausible. The new learning rules drop the restriction in the classic ART2 model that learning occurs only at the winning node; the classic ART2 learning rules are recovered as a particular case of the new rules. The model increases the system's adaptability to continually changing or complex input domains. It allows information to be maintained in a manner consistent with its use, and allows system resources to be allocated dynamically in a way that is consistent with observations of biological learning.
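This page reproduces only the abstract, not the paper's update equations. As a rough illustration of the general idea, the sketch below implements a toy ART-style categorizer in which learning is not restricted to the winning node: the winner's prototype moves toward the input while every other prototype slowly decays, so categories that stop being used gradually fade. All names, parameters, and update forms here (cosine matching, a simple vigilance test, linear decay) are assumptions made for illustration and are not the rules given in the paper.

```python
import numpy as np


class ForgettingART2Sketch:
    """Toy ART-style categorizer in which every committed node learns on each
    presentation: the winner moves toward the input, all other nodes decay
    (forget). The parameter names and update forms are illustrative only."""

    def __init__(self, dim, vigilance=0.9, lr_win=0.2, decay=0.01):
        self.dim = dim
        self.vigilance = vigilance   # match threshold for accepting a category
        self.lr_win = lr_win         # learning rate at the winning node
        self.decay = decay           # forgetting rate at non-winning nodes
        self.weights = []            # one prototype vector per committed node

    @staticmethod
    def _normalize(x):
        n = np.linalg.norm(x)
        return x / n if n > 0 else x

    def present(self, pattern):
        p = self._normalize(np.asarray(pattern, dtype=float))
        # Choose the best-matching committed node by cosine similarity.
        if self.weights:
            sims = [float(np.dot(p, self._normalize(w))) for w in self.weights]
            winner = int(np.argmax(sims))
        else:
            winner = None
        # Vigilance test: commit a new node if no existing node matches well enough.
        if winner is None or sims[winner] < self.vigilance:
            self.weights.append(p.copy())
            winner = len(self.weights) - 1
        # Learning occurs at every node, not only the winner:
        #   winner -> prototype moves toward the input,
        #   others -> prototype decays toward zero, modelling gradual forgetting.
        for j, w in enumerate(self.weights):
            if j == winner:
                self.weights[j] = w + self.lr_win * (p - w)
            else:
                self.weights[j] = (1.0 - self.decay) * w
        return winner


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    net = ForgettingART2Sketch(dim=4)
    for _ in range(50):
        net.present(rng.random(4))
    print("committed categories:", len(net.weights))
```

In this sketch, prototypes of categories that stop receiving inputs shrink toward zero and could be pruned once their norm falls below a threshold, while setting decay = 0 recovers a conventional winner-only update, loosely mirroring the abstract's remark that the classic ART2 learning rules are a particular case of the new ones.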




Editor information

Fausto Giunchiglia


Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Nachev, A., Griffith, N., Gerov, A. (1998). Dynamic learning — An approach to forgetting in ART2 neural networks. In: Giunchiglia, F. (eds) Artificial Intelligence: Methodology, Systems, and Applications. AIMSA 1998. Lecture Notes in Computer Science, vol 1480. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0057458


  • DOI: https://doi.org/10.1007/BFb0057458


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64993-9

  • Online ISBN: 978-3-540-49793-6

  • eBook Packages: Springer Book Archive
