Abstract
We present a process that enables a large random neural system with self-generated chaotic dynamics to learn complex spatiotemporal signals. Our system models a generic sensory structure. We study the interplay between a periodic spatiotemporal stimulus, i.e., a sequence of spatial patterns (which are not necessarily orthogonal), and the inner dynamics, while a Hebbian learning rule slowly modifies the weights. Learning progressively stabilizes the initially chaotic dynamical behavior and tends to produce a reliable resonance between the inner dynamics and the outer signal. Subsequently, when a learned stimulus is presented, the inner dynamics becomes regular, so that the system is able to simulate and predict in real time the evolution of its input signal. In contrast, when an unknown stimulus is presented, the dynamics remains chaotic and non-specific.
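The mechanism described above can be illustrated with a minimal sketch: a random recurrent network whose coupling gain places it in a chaotic regime, driven by a periodic sequence of spatial patterns while a slow Hebbian rule reshapes the weights. All parameter values, the update equation, and the exact form of the Hebbian rule here are illustrative assumptions, not the paper's precise model.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200          # number of neurons
g = 2.0          # coupling gain; large g puts the random network in a chaotic regime
eps = 0.01       # slow Hebbian learning rate (assumed value)
T_pattern = 5    # time steps spent on each spatial pattern
n_patterns = 4   # length of the periodic pattern sequence

# Random, non-symmetric weight matrix -> self-generated chaotic dynamics
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))

# Periodic spatiotemporal stimulus: a cyclic sequence of spatial patterns
# (drawn at random here, so they are not orthogonal in general)
patterns = rng.normal(0.0, 1.0, size=(n_patterns, N))

x = rng.normal(0.0, 0.1, size=N)  # initial network state

for t in range(2000):
    I = patterns[(t // T_pattern) % n_patterns]   # current input pattern
    x_new = J @ np.tanh(x) + I                    # recurrent drive plus stimulus
    # Hebbian step: correlate post- and pre-synaptic activities,
    # slowly reshaping J while the stimulus is present
    pre = np.tanh(x)
    post = np.tanh(x_new)
    J += (eps / N) * np.outer(post, pre)
    x = x_new
```

With the learning rate kept small relative to the network time scale, the weight drift is slow compared to the state dynamics, which is the regime the abstract describes: the stimulus-driven trajectory gradually imprints itself on the weights.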
© 2001 Springer-Verlag Berlin Heidelberg
Cite this paper
Daucé, E. (2001). Learning from Chaos: A Model of Dynamical Perception. In: Dorffner, G., Bischof, H., Hornik, K. (eds) Artificial Neural Networks — ICANN 2001. ICANN 2001. Lecture Notes in Computer Science, vol 2130. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44668-0_157
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-42486-4
Online ISBN: 978-3-540-44668-2