
Learning from Chaos: A Model of Dynamical Perception

  • Conference paper
  • In: Artificial Neural Networks — ICANN 2001 (ICANN 2001)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2130)

Abstract

We present a process that allows a large random neural system with self-generated chaotic dynamics to learn complex spatiotemporal signals. Our system models a generic sensory structure. We study the interplay between a periodic spatiotemporal stimulus, i.e. a sequence of spatial patterns (which are not necessarily orthogonal), and the inner dynamics, while a Hebbian learning rule slowly modifies the weights. Learning progressively stabilizes the initially chaotic dynamical behavior and tends to produce a reliable resonance between the inner dynamics and the outer signal. Afterwards, when a learned stimulus is presented, the inner dynamics becomes regular, so that the system can simulate and predict in real time the evolution of its input signal. In contrast, when an unknown stimulus is presented, the dynamics remains chaotic and non-specific.
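The ingredients described in the abstract — a random recurrent network in a chaotic regime, a periodic sequence of spatial input patterns, and a slow Hebbian weight update — can be sketched in broad strokes. The snippet below is a minimal illustration under stated assumptions, not the paper's actual model: the network size, gain, learning rate, discrete-time tanh dynamics, and the particular Hebbian update are all choices made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200        # network size (illustrative choice)
g = 1.5        # gain > 1 tends to push a random network toward chaos
alpha = 0.01   # slow Hebbian learning rate
P = 4          # period of the spatiotemporal stimulus
T = 2000       # number of learning steps

# Random recurrent weights with variance g^2 / N
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
J0 = J.copy()  # keep the initial weights for comparison

# Periodic stimulus: a cycle of P (not necessarily orthogonal) spatial patterns
patterns = rng.normal(0.0, 1.0, size=(P, N))

x = rng.uniform(-0.1, 0.1, size=N)  # initial network state
for t in range(T):
    x_prev = x
    # Discrete-time dynamics: recurrent drive plus the current input pattern
    x = np.tanh(J @ x_prev + patterns[t % P])
    # Hebbian update correlating post- and pre-synaptic activity
    J += (alpha / N) * np.outer(x, x_prev)
```

With this kind of scaling the untrained network typically shows irregular activity, and the slow outer-product update gradually correlates the weights with the stimulus-driven trajectory; whether the dynamics actually settles into a resonance with the input depends on the parameter regime studied in the paper.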




Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Daucé, E. (2001). Learning from Chaos: A Model of Dynamical Perception. In: Dorffner, G., Bischof, H., Hornik, K. (eds) Artificial Neural Networks — ICANN 2001. ICANN 2001. Lecture Notes in Computer Science, vol 2130. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44668-0_157


  • DOI: https://doi.org/10.1007/3-540-44668-0_157

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42486-4

  • Online ISBN: 978-3-540-44668-2

  • eBook Packages: Springer Book Archive
