
Learning Beyond Finite Memory in Recurrent Networks of Spiking Neurons

  • Conference paper
Advances in Natural Computation (ICNC 2005)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3611)

Abstract

We investigate the possibility of inducing temporal structures without fading memory in recurrent networks of spiking neurons operating strictly in the pulse-coding regime. We extend the existing gradient-based algorithm for training feed-forward spiking neuron networks (SpikeProp [1]) to recurrent network topologies, so that temporal dependencies in the input stream are taken into account. It is shown that temporal structures with unbounded input memory specified by simple Moore machines (MM) can be induced by recurrent spiking neuron networks (RSNN). The networks are able to discover pulse-coded representations of abstract information processing states coding potentially unbounded histories of processed inputs.
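To make the learning target concrete, here is a minimal Python sketch (ours, not code from the paper) of a Moore machine: a finite-state machine whose output symbol is determined by the current state alone [15]. The two-state parity machine below is a standard, hypothetical illustration of a temporal structure with unbounded input memory: flipping an input arbitrarily far in the past flips the current output, so no fading-memory system can emulate it.

    class MooreMachine:
        """Finite-state machine whose output depends only on the current state."""

        def __init__(self, transitions, outputs, start):
            self.transitions = transitions  # maps (state, input symbol) -> next state
            self.outputs = outputs          # maps state -> output symbol
            self.start = start
            self.state = start

        def reset(self):
            self.state = self.start

        def step(self, symbol):
            # Move to the next state, then emit that state's output.
            self.state = self.transitions[(self.state, symbol)]
            return self.outputs[self.state]

        def run(self, word):
            # Output sequence for a whole input word, from the start state.
            self.reset()
            return [self.step(s) for s in word]

    # Hypothetical target machine: parity of the 1s seen so far. The output
    # after any prefix depends on the entire input history, not a bounded suffix.
    parity = MooreMachine(
        transitions={("even", 0): "even", ("even", 1): "odd",
                     ("odd", 0): "odd", ("odd", 1): "even"},
        outputs={"even": 0, "odd": 1},
        start="even",
    )

    print(parity.run([1, 0, 1, 1]))  # -> [1, 1, 0, 1]

In the paper's setting, an RSNN is trained on input/output streams generated by such a machine, so that pulse-coded internal states come to play the role of the machine's abstract processing states.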

References

  1. Bohte, S., Kok, J., Poutré, H.L.: Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 48, 17–37 (2002)

  2. Siegelmann, H., Sontag, E.: On the computational power of neural nets. Journal of Computer and System Sciences 50, 132–150 (1995)

  3. Bengio, Y., Frasconi, P., Simard, P.: The problem of learning long-term dependencies in recurrent networks. In: Proceedings of the 1993 IEEE International Conference on Neural Networks, vol. 3, pp. 1183–1188 (1993)

  4. Giles, C., Miller, C., Chen, D., Chen, H., Sun, G., Lee, Y.: Learning and extracting finite state automata with second-order recurrent neural networks. Neural Computation 4, 393–405 (1992)

  5. Casey, M.: The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction. Neural Computation 8, 1135–1178 (1996)

  6. Gerstner, W.: Spiking neurons. In: Maass, W., Bishop, C. (eds.) Pulsed Neural Networks, pp. 3–54. MIT Press, Cambridge (1999)

  7. Moore, S.: Back propagation in spiking neural networks. Master’s thesis, The University of Bath (2002)

  8. Maass, W.: Lower bounds for the computational power of networks of spiking neurons. Neural Computation 8, 1–40 (1996)

  9. Natschläger, T., Maass, W.: Spiking neurons and the induction of finite state machines. Theoretical Computer Science: Special Issue on Natural Computing 287, 251–265 (2002)

  10. Floreano, D., Zufferey, J., Nicoud, J.: From wheels to wings with evolutionary spiking neurons. Artificial Life 11, 121–138 (2005)

  11. Martignon, L., Deco, G., Laskey, K.B., Diamond, M., Freiwald, W., Vaadia, E.: Neural coding: Higher-order temporal patterns in the neurostatistics of cell assemblies. Neural Computation 12, 2621–2653 (2000)

  12. Nádasdy, Z., Hirase, H., Czurkó, A., Csicsvari, J., Buzsáki, G.: Replay and time compression of recurring spike sequences in the hippocampus. The Journal of Neuroscience 19, 9497–9507 (1999)

  13. Gerstner, W.: Time structure of activity in neural network models. Physical Review E 51, 738–758 (1995)

  14. Werbos, P.: Generalization of backpropagation with application to a recurrent gas market model. Neural Networks 1, 339–356 (1988)

  15. Hopcroft, J., Ullman, J.: Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, Reading (1979)

  16. Forcada, M., Carrasco, R.: Learning the initial state of a second-order recurrent neural network during regular-language inference. Neural Computation 7, 923–930 (1995)

  17. Tiňo, P., Šajda, J.: Learning and extracting initial Mealy machines with a modular neural network model. Neural Computation 7, 822–844 (1995)

  18. Lawrence, S., Giles, C., Fong, S.: Natural language grammatical inference with recurrent neural networks. IEEE Transactions on Knowledge and Data Engineering 12, 126–140 (2000)

  19. Yao, X.: Evolving artificial neural networks. Proceedings of the IEEE 87, 1423–1447 (1999)

  20. Puskorius, G., Feldkamp, L.: Recurrent network training with the decoupled extended Kalman filter. In: Proceedings of the 1992 SPIE Conference on the Science of Artificial Neural Networks, Orlando, Florida (1992)

  21. Rowe, J., Hidovic, D.: An evolution strategy using a continuous version of the Gray-code neighbourhood distribution. In: Deb, K., et al. (eds.) GECCO 2004. LNCS, vol. 3102, pp. 725–736. Springer, Heidelberg (2004)

  22. Maass, W., Markram, H.: Synapses as dynamic memory buffers. Neural Networks 15, 155–161 (2002)

  23. Maass, W., Natschläger, T., Markram, H.: Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation 14, 2531–2560 (2002)

Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Tiňo, P., Mills, A. (2005). Learning Beyond Finite Memory in Recurrent Networks of Spiking Neurons. In: Wang, L., Chen, K., Ong, Y.S. (eds) Advances in Natural Computation. ICNC 2005. Lecture Notes in Computer Science, vol 3611. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11539117_95

  • DOI: https://doi.org/10.1007/11539117_95

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-28325-6

  • Online ISBN: 978-3-540-31858-3

  • eBook Packages: Computer Science
