
A Model of Neuronal Specialization Using Hebbian Policy-Gradient with “Slow” Noise

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5768)

Abstract

We study a model of neuronal specialization using a policy-gradient reinforcement approach. (1) The neurons fire stochastically according to their synaptic input plus a noise term; (2) the environment is a closed-loop system composed of a rotating eye and a punctual visual target; (3) the network is composed of a foveated retina, a primary layer, and a motoneuron layer; (4) the reward depends on the distance between the subjective target position and the fovea; and (5) the weight update depends on a Hebbian trace defined according to a policy-gradient principle. To take into account the mismatch between neuronal and environmental integration times, we distort the firing probability with a “pink noise” term whose autocorrelation is on the order of 100 ms, so that the firing probability is overestimated (or underestimated) over periods of about 100 ms. The rewards occurring meanwhile assess the “value” of those elementary shifts and modify the firing probability accordingly. Each motoneuron being associated with a particular angular direction, we test the preferred output of the visual cells at the end of the learning process. We find that, in accordance with the observed final behavior, the visual cells preferentially excite the motoneurons heading in the opposite angular direction.
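The mechanism described above can be sketched in code. The following is a minimal, illustrative Python/NumPy sketch, not the paper's exact equations: the "slow" noise is modeled as an Ornstein-Uhlenbeck process with a 100 ms correlation time (one common way to obtain noise with the stated autocorrelation), the firing probability is a sigmoid of synaptic input plus that noise, and the weight update multiplies a reward signal by a low-pass-filtered REINFORCE-style Hebbian trace. All parameter values, the network size, and the placeholder reward are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the paper)
dt = 0.001          # 1 ms simulation step
tau_noise = 0.1     # 100 ms autocorrelation time of the "slow" noise
sigma = 0.2         # noise amplitude
tau_trace = 0.1     # time constant of the Hebbian eligibility trace
eta = 0.01          # learning rate
n_pre, n_post = 4, 2

W = rng.normal(0.0, 0.1, (n_pre * 0 + n_post, n_pre))  # post x pre weights
xi = np.zeros(n_post)        # slow (Ornstein-Uhlenbeck) noise state
trace = np.zeros_like(W)     # Hebbian eligibility trace

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

for t in range(1000):
    x = (rng.random(n_pre) < 0.3).astype(float)   # presynaptic spikes

    # Ornstein-Uhlenbeck update: exponential autocorrelation ~ tau_noise,
    # so the distortion of the firing probability persists ~100 ms
    xi += (-xi / tau_noise) * dt \
          + sigma * np.sqrt(2.0 * dt / tau_noise) * rng.normal(size=n_post)

    p = sigmoid(W @ x + xi)                       # distorted firing probability
    s = (rng.random(n_post) < p).astype(float)    # stochastic firing

    # REINFORCE-style Hebbian term: (spike - expected spike) x presyn activity,
    # low-pass filtered into an eligibility trace
    g = np.outer(s - p, x)
    trace += dt * (g - trace) / tau_trace

    r = rng.normal()            # placeholder reward; task-dependent in the paper
    W += eta * r * trace        # reward-modulated weight update
```

Because the noise is correlated over ~100 ms, a single excursion of `xi` biases firing in one direction long enough for several rewards to arrive while the bias holds, which is what lets the reward "evaluate" each elementary shift in this scheme.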





Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Daucé, E. (2009). A Model of Neuronal Specialization Using Hebbian Policy-Gradient with “Slow” Noise. In: Alippi, C., Polycarpou, M., Panayiotou, C., Ellinas, G. (eds) Artificial Neural Networks – ICANN 2009. ICANN 2009. Lecture Notes in Computer Science, vol 5768. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04274-4_23


  • Print ISBN: 978-3-642-04273-7

  • Online ISBN: 978-3-642-04274-4

  • eBook Packages: Computer Science (R0)
