Yerkes-Dodson Law in Agents’ Training

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2902)

Abstract

The well-known Yerkes-Dodson Law (YDL) states that stimulation of medium intensity produces the fastest learning. Most experimenters have explained the YDL by the sequential action of two different processes. We show that the YDL can be elucidated even with a model as simple as a nonlinear single-layer perceptron trained by gradient descent, where the difference between desired output values is associated with stimulation strength. The nonlinear shape of the curves "number of iterations as a function of stimulation" is caused by the smoothly bounded nonlinearity of the perceptron's activation function and the difference in desired outputs.
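The setup the abstract describes can be sketched in a few lines: a single sigmoid neuron is trained by gradient descent on a toy two-class problem, the gap between the two desired output values plays the role of stimulation strength, and learning speed is measured as the number of iterations until the error drops below a (relative) tolerance. This is a minimal illustrative sketch, not the authors' code; the class geometry, learning rate, and stopping rule are assumptions, and the exact shape of the resulting iterations-versus-stimulation curve depends on those choices.

```python
import numpy as np

def iterations_to_learn(target_gap, eta=0.5, rel_tol=0.2,
                        max_iter=10_000, seed=0):
    """Train one sigmoid neuron by gradient descent and count the
    iterations until the RMS output error falls below
    `rel_tol * target_gap`.  `target_gap` models stimulation
    strength: the desired outputs are 0.5 -/+ target_gap / 2.
    """
    rng = np.random.default_rng(seed)
    # Two well-separated Gaussian classes in 2-D, 20 samples each.
    X = np.vstack([rng.normal(-1.0, 0.3, (20, 2)),
                   rng.normal(+1.0, 0.3, (20, 2))])
    t = np.r_[np.full(20, 0.5 - target_gap / 2),
              np.full(20, 0.5 + target_gap / 2)]
    w, b = np.zeros(2), 0.0
    for it in range(1, max_iter + 1):
        y = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation
        err = y - t
        if np.mean(err ** 2) < (rel_tol * target_gap) ** 2:
            return it                            # learned fast enough
        delta = err * y * (1.0 - y)              # MSE gradient w.r.t. net input
        w -= eta * (X.T @ delta) / len(t)
        b -= eta * delta.mean()
    return max_iter                              # did not converge

# Iteration counts for weak, medium and strong "stimulation".
for gap in (0.2, 0.5, 0.9):
    print(gap, iterations_to_learn(gap))
```

Because the sigmoid saturates smoothly, desired outputs pushed toward 0 and 1 (strong stimulation) shrink the gradient factor `y * (1 - y)` and slow convergence, which is the mechanism the abstract attributes the nonlinear curves to; plotting the counts over a fine grid of `target_gap` values lets one examine the dependence directly.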





Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Raudys, Š., Justickis, V. (2003). Yerkes-Dodson Law in Agents' Training. In: Pires, F.M., Abreu, S. (eds) Progress in Artificial Intelligence. EPIA 2003. Lecture Notes in Computer Science, vol 2902. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24580-3_13

  • DOI: https://doi.org/10.1007/978-3-540-24580-3_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-20589-0

  • Online ISBN: 978-3-540-24580-3
