Temporal Infomax on Markov chains with input leads to finite state automata
Introduction
One of the most basic questions in computational neuroscience concerns the nature of neural codes. Experiments suggest considerable interaction among neurons already at the level of spikes, e.g., expressed by spatio-temporal correlations [1], [4], [6], [7]. A well-known measure that quantifies relations between interacting units is the so-called mutual information: the Kullback–Leibler divergence

I(p) := D(p ∥ p1⊗⋯⊗pN) = ∑ν H(pν) − H(p),   (1)

where H(·) denotes the Shannon entropy and pν the νth marginal of p, measures the "distance" of p from the factorized distribution p1⊗⋯⊗pN. It is a natural measure of the "spatial" interdependence of N stochastic units and a starting point of recent approaches to neural complexity [5], [6]. In order to capture intrinsically temporal aspects of dynamic interaction, I in (1) has been extended by Ay [2] to the dynamical setting of Markov chains, where it is referred to as (stochastic) interaction. The optimization of stochastic interaction in Markov chains has been shown to result in almost deterministic dynamical systems [3]. That work, however, neglected external input into the considered systems. The present study therefore examines optimized Markov chains in which a part of the system is clamped to externally prescribed stochastic processes. Surprisingly, the optimized processes turn out to be finite state automata.
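As a concrete illustration, the measure I in (1) can be evaluated directly for small systems. The sketch below (numpy-based; the function names are illustrative, not from the paper) computes ∑ν H(pν) − H(p) for a joint distribution over binary units:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats; zero-probability terms contribute 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def multi_information(p, shape):
    """I(p) = sum_nu H(p_nu) - H(p): the KL divergence of the joint
    distribution p (given flat, reshaped to the product state space)
    from the product of its marginals."""
    p = np.asarray(p, dtype=float).reshape(shape)
    marginal_entropies = sum(
        entropy(p.sum(axis=tuple(a for a in range(p.ndim) if a != axis)))
        for axis in range(p.ndim))
    return marginal_entropies - entropy(p.ravel())

# Two perfectly correlated bits: I = ln 2 + ln 2 - ln 2 = ln 2.
p_corr = [0.5, 0.0, 0.0, 0.5]
print(multi_information(p_corr, (2, 2)))  # ≈ 0.6931 (= ln 2)

# Two independent bits: I = 0.
p_ind = [0.25] * 4
print(multi_information(p_ind, (2, 2)))   # ≈ 0.0
```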
Section snippets
Temporal infomax on constrained Markov chains
Consider a set V = {1,…,N} of binary units with state sets Ων = {0,1}, ν ∈ V. For a subsystem A ⊂ V, ΩA := ×ν∈A Ων denotes the set of all configurations restricted to A, and P̄(ΩA) is the set of probability distributions on ΩA. Given two subsets A and B, where B is non-empty, K(ΩA; ΩB) is the set of all Markov transition kernels from ΩA to ΩB. If A = B we use the abbreviation K(ΩA). For a probability distribution p ∈ P̄(ΩA) and a Markov kernel K ∈ K(ΩA; ΩB), the conditional entropy of (p, K) is defined as

H(p, K) := −∑x∈ΩA p(x) ∑y∈ΩB K(x; y) ln K(x; y).
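In code, and assuming the usual conventions (natural logarithm, 0·ln 0 := 0), the conditional entropy of a pair (p, K) can be evaluated as follows; the function name is illustrative:

```python
import numpy as np

def conditional_entropy(p, K):
    """H(p, K) = -sum_x p(x) sum_y K(x, y) ln K(x, y): the expected
    uncertainty about the next state y given the current state x, for a
    distribution p over states and a row-stochastic kernel K."""
    p, K = np.asarray(p, float), np.asarray(K, float)
    logK = np.log(np.where(K > 0, K, 1.0))  # ln 1 = 0 masks zero entries
    return -float(p @ (K * logK).sum(axis=1))

# A deterministic kernel (a permutation of states) has H = 0 ...
K_det = [[0.0, 1.0], [1.0, 0.0]]
print(conditional_entropy([0.5, 0.5], K_det))   # ≈ 0 (no uncertainty)
# ... while an unbiased coin-flip kernel is maximally uncertain.
K_rand = [[0.5, 0.5], [0.5, 0.5]]
print(conditional_entropy([0.5, 0.5], K_rand))  # ≈ 0.6931 (= ln 2)
```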
Simulations
The simulations displayed in the following implement the usual Markov dynamics on N binary units together with a random search scheme to optimize the stochastic interaction of the Markov chains: the interaction, I, is computed with respect to an induced stationary probability distribution of a parallel Markov kernel, and, starting from initial random values, the kernel is iteratively perturbed such that I increases (cf. [3]). In contrast to [3], however, the optimization here is not unconstrained: a subset of the units is clamped to externally prescribed input processes.
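A minimal sketch of such a random-search scheme, assuming the definition of stochastic interaction from [2], [3] — I(p, K) = ∑ν H(pν, K^ν) − H(p, K), with K^ν the p-weighted marginal kernel of unit ν on its own state set — and illustrative function names; the input-clamping of the actual study is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

def stationary(K):
    """Stationary distribution: left eigenvector of K for eigenvalue 1."""
    w, v = np.linalg.eig(K.T)
    p = np.abs(np.real(v[:, np.argmin(np.abs(w - 1.0))]))
    return p / p.sum()

def cond_entropy(p, K):
    """H(p, K) = -sum_x p(x) sum_y K(x, y) ln K(x, y)."""
    logK = np.log(np.where(K > 0, K, 1.0))  # ln 1 = 0 masks zero entries
    return -float(p @ (K * logK).sum(axis=1))

def interaction(K, N):
    """Stochastic interaction w.r.t. the stationary distribution p:
    I = sum_nu H(p_nu, K^nu) - H(p, K), with K^nu the p-weighted
    marginal kernel of unit nu on its own state set {0, 1}."""
    p = stationary(K)
    bit = lambda s, nu: (s >> nu) & 1
    I = -cond_entropy(p, K)
    for nu in range(N):
        p_nu, K_nu = np.zeros(2), np.zeros((2, 2))
        for x in range(2 ** N):
            p_nu[bit(x, nu)] += p[x]
            for y in range(2 ** N):
                K_nu[bit(x, nu), bit(y, nu)] += p[x] * K[x, y]
        K_nu /= p_nu[:, None]
        I += cond_entropy(p_nu, K_nu)
    return I

# Random search: start from a random kernel on N = 2 units and keep any
# multiplicative perturbation that increases the interaction (cf. [3]).
N, n = 2, 4
K = rng.random((n, n)); K /= K.sum(axis=1, keepdims=True)
best = interaction(K, N)
for _ in range(2000):
    K_try = K * np.exp(0.3 * rng.standard_normal((n, n)))
    K_try /= K_try.sum(axis=1, keepdims=True)
    if (I_try := interaction(K_try, N)) > best:
        K, best = K_try, I_try
print(round(best, 3))  # climbs toward the maximum 2 ln 2 ≈ 1.386

# The deterministic cycle 00 -> 01 -> 11 -> 10 -> 00 (an automaton) attains
# that maximum: globally deterministic dynamics, yet each single unit's
# marginal process is maximally random.
K_cycle = np.zeros((n, n))
for x, y in [(0b00, 0b01), (0b01, 0b11), (0b11, 0b10), (0b10, 0b00)]:
    K_cycle[x, y] = 1.0
print(round(interaction(K_cycle, N), 3))  # 1.386 (= round(2 ln 2, 3))
```

The cycle example mirrors the paper's central observation: maximizers of stochastic interaction are (almost) deterministic, automaton-like kernels.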
Thomas Wennekers studied physics at the University of Düsseldorf and computer science at the University of Ulm, where he received a Ph.D. in 1998. At present he is doing research at the Max-Planck-Institute for Mathematics in the Sciences in Leipzig in the fields of computational neuroscience and brain theory.
References (8)
- M. Abeles, Corticonics: Neural Circuits of the Cerebral Cortex, Cambridge University Press (1991)
- N. Ay, Information geometry on complexity and stochastic interaction, 2002, submitted for publication
- N. Ay, T. Wennekers, Dynamical properties of strongly interacting Markov chains, 2002, submitted for publication
- R. Eckhorn, Neural mechanisms of scene segmentation: recordings from the visual cortex suggest basic circuits for linking field models, IEEE Trans. Neural Networks (1999)
Nihat Ay studied Mathematics at the Ruhr-University Bochum and obtained a Ph.D. in Mathematics from the University of Leipzig in 2001. He currently works at the Max-Planck-Institute for Mathematics in the Sciences in Leipzig on information geometry and its applications in complex adaptive systems.