Elsevier

Neurocomputing

Volumes 44–46, June 2002, Pages 133–139

Coherence detection in a spiking neuron via Hebbian learning

https://doi.org/10.1016/S0925-2312(02)00374-0

Abstract

It is generally assumed that neurons communicate through temporal firing patterns. As a first step, we study the learning behavior of a layer of realistic neurons in the particular case where the relevant messages are formed by temporally correlated patterns, or synfire patterns. The model is a layer of integrate-and-fire neurons with synaptic current dynamics that adapts by minimizing a cost function according to a gradient descent scheme. The cost we define leads to a rule similar to spike-time-dependent Hebbian plasticity. Moreover, our results show that the rule we derive is biologically plausible and detects the coherence in the input in an unsupervised way. An application to shape recognition is shown as an illustration.


Coding scheme

We represent (as in [2]) the signal S_i at synapse i by a sum of Dirac pulses located at the spiking times t_i^k drawn from the list of spikes Γ_i (see Fig. 1, left):

S_i(t) = ∑_{k∈Γ_i} δ(t − t_i^k).

Synfire patterns are generated in analogy with the response of a retina to flashed binary images. The input of the synapses is characterized as the output of single-synapse IF neurons responding to a specific binary input. This response may be described as the sum of two random point processes with different
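The generation of such spike lists Γ_i can be sketched as follows; this is a minimal Python illustration, assuming homogeneous Poisson point processes with arbitrary "on" and "off" rates (the actual generation procedure and rates used in the paper are not given in this snippet):

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_spike_times(rate, duration):
    """Spike times of a homogeneous Poisson process (rate in Hz, duration in s)."""
    n = rng.poisson(rate * duration)
    return np.sort(rng.uniform(0.0, duration, n))

def synfire_input(image, rate_on=80.0, rate_off=5.0, duration=0.5):
    """One spike list Gamma_i per synapse: 'on' pixels of the flashed binary
    image fire at a high rate, 'off' pixels at a low background rate.
    The two rates are illustrative assumptions."""
    return [poisson_spike_times(rate_on if pixel else rate_off, duration)
            for pixel in image]

image = np.array([1, 0, 1, 1, 0])   # flashed binary pattern
spikes = synfire_input(image)       # synapses driven by 'on' pixels carry
                                    # many more spikes than background ones
```

Each returned array plays the role of one list Γ_i; the sum of Dirac pulses S_i(t) is then implicit in how these times drive the synaptic currents.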

Definition of the cost function

Based on neurophysiological studies, we set the following principles:

  • (1)

    the learning is associated with a spiking response: the nth learning step occurs at the nth output firing time tn,

  • (2)

    to discriminate between the different input patterns, the output voltage should be close to a winner-take-all configuration: the potential of the winning neuron (which we index j=jn) should be above threshold whereas other neurons should be hyperpolarized,

  • (3)

    economy of the total synaptic efficacy and current use.
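The explicit form of the cost is not reproduced in this snippet; for illustration, a cost embodying principles (1)–(3) might combine a winner-take-all term with a penalty on the synaptic efficacies (the hyperpolarized target V_- and the weighting λ below are assumptions, not the paper's actual terms):

```latex
% Illustrative cost, evaluated at the n-th output firing time t_n with winner j_n:
E_n = \underbrace{\tfrac{1}{2}\bigl(V_{j_n}(t_n)-\theta\bigr)^2}_{\text{winner near threshold }\theta}
    + \underbrace{\tfrac{1}{2}\sum_{j\neq j_n}\bigl(V_j(t_n)-V_-\bigr)^2}_{\text{losers hyperpolarized}}
    + \underbrace{\tfrac{\lambda}{2}\sum_{i,j} w_{ij}^{2}}_{\text{economy of efficacies}}
```

Gradient descent on such a cost with respect to the weights w_{ij} produces updates only at the output firing times t_n, consistent with principle (1).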

Numerical results

We implemented this model using discrete versions of the differential equations (forward Euler method) in MATLAB.
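The forward Euler discretization of an integrate-and-fire neuron with synaptic current dynamics can be sketched as follows; this is a Python translation of the scheme (time constants, threshold, weights, and input spike times below are illustrative assumptions, since the paper's parameters are not given in this snippet):

```python
import numpy as np

# Parameters (illustrative assumptions)
dt, T = 1e-4, 0.05                # time step and duration (s)
tau_m, tau_s = 20e-3, 5e-3        # membrane and synaptic time constants (s)
theta = 1.0                       # firing threshold (potential reset to 0 on spike)

w = np.array([0.8, 0.8, 0.5])     # synaptic weights (illustrative)
spike_trains = [[0.010], [0.012], [0.011]]  # presynaptic spike times t_i^k (s)

n_steps = int(T / dt)
# Convert each spike time to the index of the Euler step where it arrives
spike_steps = [np.floor(np.asarray(ts) / dt).astype(int) for ts in spike_trains]

v = 0.0
i_syn = np.zeros(len(w))
output_spikes = []
for n in range(n_steps):
    # Dirac input: each presynaptic spike injects a unit-area current kernel
    for i, steps in enumerate(spike_steps):
        i_syn[i] += np.count_nonzero(steps == n) / tau_s
    # Forward Euler update of synaptic currents and membrane potential
    i_syn += dt * (-i_syn / tau_s)
    v += dt * (-v / tau_m + w @ i_syn)
    if v >= theta:                # threshold crossing: emit a spike and reset
        output_spikes.append(n * dt)
        v = 0.0
```

With these values the three nearly coincident input spikes drive the potential over threshold, so the neuron responds to the temporally correlated (synfire) input with a single output spike shortly after the pattern arrives.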

Conclusion

We have presented an original gradient-descent method for deriving a learning rule for a layer of spiking neurons. The simplicity of the rule gives new insight into the mechanism behind the observed spike-time-dependent Hebbian plasticity (STDHP). Further work is in progress on the detection of asynchronous patterns.

However, this study should be extended to more realistic spike trains (e.g. bursts), account for more complex behavior (e.g. facilitation and depression), and may be extended to populations of neurons and

Acknowledgements

This work was initiated during the EU Advanced Course in Computational Neuroscience. LP wishes to thank its organizers, the teachers, the course-mates and his tutor, S. Panzeri.

Laurent Perrinet is a Ph.D. student in Computational Neuroscience under the direction of Manuel Samuelides at CERT-ONERA, Toulouse, in close collaboration with the team of Simon Thorpe at CERCO-CNRS, Toulouse. He works on theoretical and simulated aspects of neural coding, especially on the implications of fast-categorization visual experiments. His working areas span learning (especially spike-timing-dependent plasticity) and the statistics of natural images.

References (4)

