
The Task Rehearsal Method of Life-Long Learning: Overcoming Impoverished Data

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2338)

Abstract

The task rehearsal method (TRM) is introduced as an approach to life-long learning that uses the representation of previously learned tasks as a source of inductive bias. This inductive bias enables TRM to generate more accurate hypotheses for new tasks that have small sets of training examples. TRM has a knowledge retention phase during which the neural network representation of a successfully learned task is stored in a domain knowledge database, and a knowledge recall and learning phase during which virtual examples of stored tasks are generated from the domain knowledge. The virtual examples are rehearsed as secondary tasks in parallel with the learning of a new (primary) task using the ηMTL neural network algorithm, a variant of multiple task learning (MTL). The results of experiments on three domains show that TRM is effective in retaining task knowledge in a representational form and transferring that knowledge in the form of virtual examples. TRM with ηMTL is shown to develop more accurate hypotheses for tasks that suffer from impoverished training sets.
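
As a rough illustration of the two phases described in the abstract, the sketch below (Python, numpy only) stores a learned task as a labelling function, generates virtual examples from it, and rehearses them alongside a small primary training set in a network with a shared hidden layer and one output head per task. The helper names (retain_task, make_virtual_examples, train_mtl) and the fixed secondary-task weight are illustrative assumptions, not the authors' implementation; in particular, the actual ηMTL algorithm sets per-task learning rates from a measure of task relatedness, which this sketch replaces with a constant weight.

```python
# A minimal, illustrative sketch of task rehearsal; not the authors' code.
# Assumptions: numpy only; a "stored task" is reduced to a callable that
# labels inputs; etaMTL's relatedness-based learning rates are replaced by
# the fixed per-(example, task) weights in the `weights` matrix below.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Knowledge retention phase: keep the representation of each learned task.
domain_knowledge = {}

def retain_task(name, predict_fn):
    """Store a successfully learned task as a labelling function."""
    domain_knowledge[name] = predict_fn

# Knowledge recall phase: build virtual examples from a stored task.
def make_virtual_examples(name, n, dim):
    """Label randomly drawn inputs with the stored task's outputs."""
    X = rng.uniform(-1.0, 1.0, size=(n, dim))
    return X, domain_knowledge[name](X)

# Learning phase: one shared hidden layer, one sigmoid output head per task.
def train_mtl(X, Y, weights, hidden=8, lr=0.5, epochs=3000):
    """Fit primary and rehearsed tasks jointly; `weights` masks and scales
    the squared error of each (example, task) pair."""
    n, d = X.shape
    k = Y.shape[1]
    W1 = rng.normal(0.0, 0.5, (d, hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, k))
    for _ in range(epochs):
        H = sigmoid(X @ W1)              # shared internal representation
        P = sigmoid(H @ W2)              # per-task predictions
        dP = weights * (P - Y) * P * (1.0 - P) / n
        dH = (dP @ W2.T) * H * (1.0 - H)
        W2 -= lr * (H.T @ dP)
        W1 -= lr * (X.T @ dH)
    return W1, W2

# Toy usage: rehearse stored task "A" while learning a related primary task
# "B" from an impoverished training set of ten examples.
dim = 4
retain_task("A", lambda X: (X[:, [0]] > 0).astype(float))

Xp = rng.uniform(-1.0, 1.0, size=(10, dim))
yp = (Xp[:, [0]] + 0.2 * Xp[:, [1]] > 0).astype(float)
Xv, yv = make_virtual_examples("A", n=200, dim=dim)

X = np.vstack([Xp, Xv])
Y = np.zeros((X.shape[0], 2))
weights = np.zeros_like(Y)
Y[:10, 0], weights[:10, 0] = yp.ravel(), 1.0   # column 0: primary task B
Y[10:, 1], weights[10:, 1] = yv.ravel(), 0.5   # column 1: rehearsed task A, down-weighted
W1, W2 = train_mtl(X, Y, weights)
```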





Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Silver, D.L., Mercer, R.E. (2002). The Task Rehearsal Method of Life-Long Learning: Overcoming Impoverished Data. In: Cohen, R., Spencer, B. (eds) Advances in Artificial Intelligence. Canadian AI 2002. Lecture Notes in Computer Science (LNAI), vol 2338. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-47922-8_8


  • DOI: https://doi.org/10.1007/3-540-47922-8_8


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43724-6

  • Online ISBN: 978-3-540-47922-2

