Abstract
The well-known Yerkes-Dodson Law (YDL) states that stimulation of medium intensity produces the fastest learning. Experimenters have mostly explained the YDL by the sequential action of two different processes. We show that the YDL can be elucidated even with a model as simple as a nonlinear single-layer perceptron trained by gradient descent, where the difference between desired output values is associated with stimulation strength. The nonlinear shape of the curves "number of iterations as a function of stimulation" is caused by the smoothly bounded nonlinearity of the perceptron's activation function and by the difference in desired outputs.
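The mechanism sketched in the abstract can be illustrated with a minimal toy construction (ours, not the authors' experiment): a single sigmoid weight is trained by batch gradient descent on two symmetric stimuli whose desired outputs are 0.5 ± a/2, so the parameter a plays the role of stimulation strength. The initial "wrong habit" weight w0, the learning rate, and the relative error goal are all illustrative assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def epochs_to_goal(a, lr=4.0, w0=-2.0, goal_frac=0.05, max_epochs=10000):
    """Epochs of gradient descent needed by a single sigmoid neuron.

    Two stimuli x = +1 and x = -1 carry desired outputs 0.5 + a/2 and
    0.5 - a/2, so `a` models stimulation strength.  The neuron starts
    with a wrong habit (w0 < 0) and trains until the mean squared error
    falls below goal_frac times (a/2)**2, the error of an untrained,
    indifferent neuron that always outputs 0.5.
    """
    t_pos = 0.5 + a / 2.0              # desired output for x = +1
    goal = goal_frac * (a / 2.0) ** 2  # relative MSE goal
    w = w0
    for epoch in range(max_epochs):
        # By symmetry both stimuli contribute identically, so the
        # mean gradient equals the x = +1 contribution alone.
        y = sigmoid(w)
        e = t_pos - y
        if e * e < goal:
            return epoch
        w += lr * e * y * (1.0 - y)    # MSE gradient step
    return max_epochs

for a in (0.05, 0.5, 0.95):
    print(a, epochs_to_goal(a))
```

With these settings, training is slow at weak stimulation (a tiny error signal gives tiny gradients) and at strong stimulation (the desired outputs lie in the saturated region of the sigmoid, where the derivative y(1−y) is small), and fastest in between, mirroring the inverted-U of the YDL.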
© 2003 Springer-Verlag Berlin Heidelberg
Cite this paper
Raudys, Š., Justickis, V. (2003). Yerkes-Dodson Law in Agents’ Training. In: Pires, F.M., Abreu, S. (eds) Progress in Artificial Intelligence. EPIA 2003. Lecture Notes in Computer Science, vol 2902. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24580-3_13
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-20589-0
Online ISBN: 978-3-540-24580-3