Neurocomputing

Volumes 52–54, June 2003, Pages 431–436
Temporal Infomax on Markov chains with input leads to finite state automata

https://doi.org/10.1016/S0925-2312(02)00862-7

Abstract

Information maximization between stationary input and output activity distributions of neural ensembles has been a guiding principle in the study of neural codes. We have recently extended the approach to the optimization of information measures that capture spatial and temporal signal properties. Unconstrained Markov chains that optimize these measures have been shown to be almost deterministic. In the present work we consider the optimization of stochastic interaction in constrained Markov chains in which some of the units are clamped to prescribed processes. Temporal Infomax in this case leads to finite state automata.

Introduction

One of the most basic questions in computational neuroscience concerns the nature of neural codes. Experiments suggest considerable interaction between neurons already at the level of spikes, expressed, e.g., by spatio-temporal correlations [1], [4], [6], [7]. A well-known measure that quantifies relations between interacting units is the so-called mutual information: the Kullback–Leibler divergence

\[ I(p) \coloneqq D(p \,\|\, p_1 \otimes \cdots \otimes p_N) = \sum_{\nu=1}^{N} H(p_\nu) - H(p), \tag{1} \]

where H(·) denotes the Shannon entropy and p_ν the ν-th marginal of p, measures the "distance" of p from the factorized distribution p_1 ⊗ ⋯ ⊗ p_N. It is a natural measure of the "spatial" interdependence of N stochastic units and a starting point of recent approaches to neural complexity [5], [6]. In order to capture intrinsically temporal aspects of dynamic interaction, I in (1) has been extended by Ay [2] to the dynamical setting of Markov chains, where it is referred to as (stochastic) interaction. The optimization of stochastic interaction in Markov chains has been shown to result in almost deterministic dynamical systems [3]. That work neglected external input into the considered systems. The present study therefore considers optimized Markov chains in which a part of the system is clamped to externally prescribed stochastic processes. Surprisingly, the optimized processes turn out to be finite state automata.
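As a concrete illustration of Eq. (1), the following minimal Python sketch (ours, not part of the article; all function names are hypothetical) computes I(p) for a joint distribution over N binary units by summing the marginal entropies and subtracting the joint entropy.

```python
# Minimal sketch of Eq. (1): I(p) = sum_nu H(p_nu) - H(p)
# for a joint distribution p over N binary units.
import itertools
import numpy as np

def entropy(q):
    """Shannon entropy in nats; zero-probability terms contribute nothing."""
    q = q[q > 0]
    return -np.sum(q * np.log(q))

def multi_information(p, N):
    """p: array of length 2**N, indexed by binary configurations (omega_1,...,omega_N)."""
    configs = np.array(list(itertools.product([0, 1], repeat=N)))
    marginal_entropies = 0.0
    for nu in range(N):
        # nu-th marginal: p_nu(x) = sum over all configurations with omega_nu = x
        p_nu = np.array([p[configs[:, nu] == x].sum() for x in (0, 1)])
        marginal_entropies += entropy(p_nu)
    return marginal_entropies - entropy(p)

# Two perfectly correlated units give I = ln 2; independent units give I = 0.
p_corr = np.array([0.5, 0.0, 0.0, 0.5])   # p(00) = p(11) = 1/2
print(multi_information(p_corr, N=2))     # ~0.693 = ln 2
```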


Temporal infomax on constrained Markov chains

Consider a set V = {1, …, N} of binary units with state sets Ω_ν = {0,1}, ν ∈ V. For a subsystem A ⊆ V, Ω_A ≔ {0,1}^A denotes the set of all configurations restricted to A, and P̄(Ω_A) is the set of probability distributions on Ω_A. Given two subsets A and B, where B is non-empty, K̄(Ω_A, Ω_B) is the set of all Markov transition kernels from Ω_A to Ω_B. If A = B we use the abbreviation K̄(Ω_A). For a probability distribution p ∈ P̄(Ω_A) and a Markov kernel K ∈ K̄(Ω_A), the conditional entropy of (p, K) is defined as

\[ H(p,K) \coloneqq -\sum_{\omega \in \Omega_A} p(\omega) \sum_{\omega' \in \Omega_A} K(\omega' \mid \omega) \ln K(\omega' \mid \omega). \]
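The definition translates directly into code. The following sketch (our illustration with hypothetical names, not from the paper) evaluates H(p, K) for a row-stochastic kernel, using the convention 0 ln 0 = 0.

```python
# Sketch of the conditional entropy H(p, K) defined above, for a kernel K
# on the configuration set Omega_A with initial distribution p.
import numpy as np

def conditional_entropy(p, K):
    """
    p: length-M distribution over configurations omega in Omega_A.
    K: M x M row-stochastic matrix, K[i, j] = K(omega_j | omega_i).
    Returns -sum_i p(i) sum_j K(j|i) ln K(j|i), with 0 ln 0 := 0.
    """
    with np.errstate(divide="ignore"):
        logK = np.where(K > 0, np.log(np.where(K > 0, K, 1.0)), 0.0)
    return -np.sum(p[:, None] * K * logK)

# A deterministic kernel (a permutation matrix) has H(p, K) = 0 for every p,
# which is why maximizing interaction pushes chains toward determinism.
K_det = np.array([[0.0, 1.0], [1.0, 0.0]])
print(conditional_entropy(np.array([0.5, 0.5]), K_det))  # 0.0
```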

Simulations

The simulations displayed in the following implement the usual Markov dynamics on N binary units together with a random-search scheme to optimize the stochastic interaction of the Markov chains: the interaction, I, is computed with respect to an induced stationary probability distribution of a parallel Markov kernel, and, starting from initial random values, the kernel is iteratively perturbed such that I increases (cf. [3]). In contrast to [3], however, the optimization here is not unconstrained: part of the units are clamped to externally prescribed stochastic processes.
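The random-search scheme can be sketched as follows. This is our reconstruction, not the authors' code: as a toy objective we use only the global term −H(p, K) of the interaction, which by itself already favors almost deterministic kernels (cf. [3]); the full measure adds the marginal conditional entropies, and the constrained setting would additionally leave the transition behaviour of the clamped input units untouched during perturbation.

```python
# Rough sketch of the random-search scheme: perturb a row-stochastic kernel K
# and keep the perturbation whenever the objective, evaluated at an
# (approximate) stationary distribution of K, increases.
import numpy as np

rng = np.random.default_rng(0)

def stationary(K, iters=2000):
    # Approximate a stationary distribution by damped power iteration;
    # the lazy chain (I + K)/2 has the same stationary distributions as K.
    p = np.full(K.shape[0], 1.0 / K.shape[0])
    for _ in range(iters):
        p = 0.5 * (p + p @ K)
    return p

def objective(p, K):
    # Toy stand-in for the interaction: -H(p, K). The full measure adds
    # the marginal conditional entropies sum_nu H_nu(p, K).
    return np.sum(p[:, None] * K * np.log(K))

def random_search(M=4, steps=5000, noise=0.05):
    # Start from a random row-stochastic M x M kernel.
    K = np.clip(rng.random((M, M)), 1e-12, None)
    K /= K.sum(axis=1, keepdims=True)
    best = objective(stationary(K), K)
    for _ in range(steps):
        # Perturb, renormalize rows, and accept only improvements.
        K_new = np.clip(K + noise * rng.standard_normal((M, M)), 1e-12, None)
        K_new /= K_new.sum(axis=1, keepdims=True)
        val = objective(stationary(K_new), K_new)
        if val > best:
            K, best = K_new, val
    return K

print(np.round(random_search(), 2))  # rows concentrate near 0/1 entries
```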


References (8)

  • M. Abeles, Corticonics: Neural Circuits of the Cerebral Cortex (1991)
  • N. Ay, Information geometry on complexity and stochastic interaction (2002), submitted for publication
  • N. Ay, T. Wennekers, Dynamical properties of strongly interacting Markov chains (2002), submitted for publication
  • R. Eckhorn, Neural mechanisms of scene segmentation: recordings from the visual cortex suggest basic circuits for linking field models, IEEE Trans. Neural Networks (1999)

Thomas Wennekers studied physics at the University of Düsseldorf and computer science at the University of Ulm, where he received a Ph.D. in 1998. At present he is doing research at the Max-Planck-Institute for Mathematics in the Sciences in Leipzig in the fields of computational neuroscience and brain theory.

Nihat Ay studied Mathematics at the Ruhr-University Bochum and obtained a Ph.D. in Mathematics from the University of Leipzig in 2001. He currently works at the Max-Planck-Institute for Mathematics in the Sciences in Leipzig on information geometry and its applications in complex adaptive systems.
