Recruitment vs. Backpropagation Learning: An empirical study on re-learning in connectionist networks

  • Conference paper
Konnektionismus in Artificial Intelligence und Kognitionsforschung

Part of the book series: Informatik-Fachberichte (INFORMATIK, volume 252)

Abstract

This paper describes a first comparison between two connectionist learning techniques: backpropagation and recruitment learning. The task is to re-learn a conceptual representation, i.e. to significantly change a representation in an additional training period using new data. Backpropagation refers to a widely known, supervised learning technique that requires the repeated presentation of a set of training instances. Recruitment learning refers to a technique that converts network units from a pool of free units into units carrying meaningful information; it can be used for both instruction-based and similarity-based learning. It is shown that a learning technique which makes use of structured knowledge (i.e. recruitment learning) re-learns and modifies a connectionist representation faster than backpropagation.
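To make the contrast concrete, the following is a minimal, hypothetical sketch in Python/NumPy, not code or data from the paper: re-learning with backpropagation means running a further full training period of iterative weight updates on revised targets, whereas recruitment learning commits a free unit from a pool by fixing its connections in essentially one step. The network sizes, the toy concepts y_old/y_new, and the recruit helper are illustrative assumptions only.

```python
# Illustrative sketch only -- not the networks or data used in the paper.
import numpy as np

rng = np.random.default_rng(0)

# --- Backpropagation: re-learning means another full period of weight updates ---
X = rng.normal(size=(8, 4))             # 8 toy training instances, 4 features
y_old = (X[:, 0] > 0).astype(float)     # original concept (hypothetical)
y_new = (X[:, 1] > 0).astype(float)     # revised concept to be re-learned

W1 = rng.normal(scale=0.5, size=(4, 6))
W2 = rng.normal(scale=0.5, size=(6, 1))

def forward(X, W1, W2):
    h = np.tanh(X @ W1)                      # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2)))    # sigmoid output
    return h, out

def train(X, y, W1, W2, lr=0.5, epochs=500):
    """Repeated presentation of the training set with gradient updates."""
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        h, out = forward(X, W1, W2)
        err = out - y                         # gradient of cross-entropy w.r.t. logits
        dW2 = h.T @ err / len(X)
        dh = (err @ W2.T) * (1 - h**2)        # backpropagate through tanh
        dW1 = X.T @ dh / len(X)
        W2 -= lr * dW2
        W1 -= lr * dW1
    return W1, W2

W1, W2 = train(X, y_old, W1, W2)              # initial learning
W1, W2 = train(X, y_new, W1, W2)              # re-learning: another training period

# --- Recruitment: commit a free unit to the revised concept in one step ---
pool = [None] * 6                             # pool of uncommitted ("free") units

def recruit(pool, feature_weights, label):
    """Bind a free unit to a new concept by fixing its input weights at once."""
    idx = pool.index(None)
    pool[idx] = {"label": label, "weights": np.asarray(feature_weights, float)}
    return idx

unit = recruit(pool, feature_weights=[0.0, 1.0, 0.0, 0.0], label="revised concept")
print("recruited unit", unit, "->", pool[unit]["label"])
```

In a scheme like this, the recruitment step touches only the newly committed unit rather than iterating over the whole training set, which is the intuition behind the speed advantage reported in the paper.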

Copyright information

© 1990 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Diederich, J. (1990). Recruitment vs. Backpropagation Learning: An empirical study on re-learning in connectionist networks. In: Dorffner, G. (eds) Konnektionismus in Artificial Intelligence und Kognitionsforschung. Informatik-Fachberichte, vol 252. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-76070-9_20

  • DOI: https://doi.org/10.1007/978-3-642-76070-9_20

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-53131-9

  • Online ISBN: 978-3-642-76070-9

  • eBook Packages: Springer Book Archive
