
Neurocomputing

Volumes 26–27, June 1999, Pages 729–734

A neural mechanism of feature binding based on the dynamical map theory in distributed coding scheme

https://doi.org/10.1016/S0925-2312(98)00141-6

Abstract

In order to understand a neural basis for the binding problem, we propose a neural network model constructed on the basis of dynamical map theory. In this architecture, object stimulation activates sets of neurons in the different sensory networks, in which all the features of an object are stored as point attractors. Relations among these features are encoded as a point attractor in the integration network. Model simulations show that dynamic linkages among point attractors across the different networks play a key role in solving the binding problem, with the integration network serving to mediate the binding.

Introduction

At any moment we can easily perceive any individual object in a visual scene. It is highly reasonable to assume that seeing a single object induces neuronal activity in many different visual areas, because an object carries different submodalities of visual features such as shape, orientation, color, movement, and so on. The so-called binding problem arises from the question of how these neurons across the different visual areas become active simultaneously as a unit when an object is presented. An object may also have features belonging to other sensory modalities such as hearing, smell, taste, and touch; another type of binding then occurs across the different sensory cortical areas. Although various kinds of binding problems have been studied actively at different stages of neural information processing, the basic principle that governs them has not yet been clarified.

The aim of this study is to present a neural basis for a systematic understanding of these binding problems. We propose a neural network model constructed on the basis of “dynamical map theory” [8] and the hypothetical neural architecture proposed by Damasio [1]. The model can deal with multimodal sensory information.

Section snippets

The neural network model

Associative neural networks with excitatory and inhibitory synapses were used to model two sensory areas (S1 and S2) and an integration area (G), where S1 and S2 belong to different sensory modalities. The principal neurons of the sensory networks (S1 and S2) simultaneously receive an external stimulus arising from an object and a sinusoidal input arising from the global oscillation. The principal neurons of the sensory networks send feedforward projections to the principal …
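
As a rough illustration of how such a three-network architecture can be wired up, the minimal Python sketch below couples two sensory networks and an integration network. The network sizes, weight statistics, oscillation frequency, and update rule are assumptions for illustration; this is not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the architecture described above: two sensory networks
# (S1, S2) and an integration network (G). All sizes, gains, and the
# oscillation frequency are illustrative assumptions.

rng = np.random.default_rng(0)
N_S, N_G = 100, 100                          # neurons per network (assumed)

W_S1 = rng.normal(0.0, 0.1, (N_S, N_S))      # recurrent weights within S1
W_S2 = rng.normal(0.0, 0.1, (N_S, N_S))      # recurrent weights within S2
W_G  = rng.normal(0.0, 0.1, (N_G, N_G))      # recurrent weights within G
W_1G = rng.normal(0.0, 0.1, (N_G, N_S))      # feedforward projection S1 -> G
W_2G = rng.normal(0.0, 0.1, (N_G, N_S))      # feedforward projection S2 -> G

def step(x1, x2, xg, stim1, stim2, t, freq=0.01):
    """One update: each sensory network receives its object-driven stimulus
    plus a common sinusoidal input standing in for the global oscillation;
    the integration network receives feedforward input from S1 and S2."""
    osc = np.sin(2.0 * np.pi * freq * t)
    x1 = np.tanh(W_S1 @ x1 + stim1 + osc)
    x2 = np.tanh(W_S2 @ x2 + stim2 + osc)
    xg = np.tanh(W_G @ xg + W_1G @ x1 + W_2G @ x2)
    return x1, x2, xg

# Usage: drive the networks with fixed object stimuli for a few hundred steps.
x1, x2, xg = np.zeros(N_S), np.zeros(N_S), np.zeros(N_G)
stim1 = rng.choice([-1.0, 1.0], size=N_S)
stim2 = rng.choice([-1.0, 1.0], size=N_S)
for t in range(300):
    x1, x2, xg = step(x1, x2, xg, stim1, stim2, t)
```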

Formation of dynamical maps

In order to construct dynamical maps extracting the relevant sensory features [8] in the sensory networks and the integration network, we considered two sets of sensory features, (A1, B1, C1) and (A2, B2, C2), for the sensory networks S1 and S2, respectively. The sensory features Xn (X = A, B, C; n = 1, 2) are encoded into stationary firing patterns of the relevant network [8]. We presented these firing patterns as input stimuli to the principal neurons of the sensory networks S1 and S2. We applied …
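
One conventional way to realize features stored as stationary firing patterns (point attractors) in a toy model is a Hopfield-style Hebbian prescription. The sketch below uses random binary patterns as stand-ins for (A1, B1, C1) and (A2, B2, C2); the learning rule and the treatment of the integration network are assumptions, not necessarily those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100                                       # neurons per sensory network (assumed)

# Random binary firing patterns standing in for the feature sets
# (A1, B1, C1) of S1 and (A2, B2, C2) of S2.
patterns_S1 = rng.choice([-1, 1], size=(3, N))
patterns_S2 = rng.choice([-1, 1], size=(3, N))

def store_as_point_attractors(patterns):
    """Hopfield-style Hebbian rule: each stored pattern becomes a
    stationary (point-attractor) state of the recurrent network."""
    n = patterns.shape[1]
    W = (patterns.T @ patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

W_S1 = store_as_point_attractors(patterns_S1)
W_S2 = store_as_point_attractors(patterns_S2)

# One possible reading of "relations among these features are encoded as a
# point attractor in the integration network": store the paired features
# (A1, A2), (B1, B2), (C1, C2) as concatenated patterns in G.
W_G = store_as_point_attractors(np.hstack([patterns_S1, patterns_S2]))
```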

Cognitive processes based on the dynamical maps

In order to find out how the dynamical maps work in solving binding problems, we carried out two typical cognitive tasks: an object association task and an object segregation task. In general, object association is a process in which the whole image of an object is perceived when only partial information about the object is presented to a neural system. Object segregation is a process in which the individual images of objects that are simultaneously presented to a neural system are classified …
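
To make the association task concrete, the self-contained toy sketch below (under the same Hopfield-style assumptions as above, not the authors' simulation) presents partial information about one stored feature and lets the network relax to the corresponding point attractor, i.e. pattern completion.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100                                       # network size (assumed)
patterns = rng.choice([-1, 1], size=(3, N))   # toy stand-ins for features A1, B1, C1

W = (patterns.T @ patterns) / N               # Hopfield-style storage as point attractors
np.fill_diagonal(W, 0.0)

def recall(W, cue, steps=20):
    """Iterate the sign dynamics until the state settles into a point attractor."""
    x = cue.copy()
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Object association as pattern completion: present a corrupted (partial)
# version of feature A1 and check that the network recovers the whole pattern.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 4, replace=False)
cue[flip] *= -1

recovered = recall(W, cue)
print("overlap with stored feature:", (recovered @ patterns[0]) / N)  # ~1.0 on success
```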

Concluding remarks

We have shown [8] that the transition of the dynamical state of a sensory network from a randomly itinerant state to a point attractor can be induced by short-term synaptic modification under application of a stimulus, and that this transition is a dynamical phase transition. The significance of dynamical phase transitions among various kinds of attractors has been proposed for various forms of neural information processing, such as pattern recognition [4], [8], learning and memory [8], and odor …
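
As a toy rendering of stimulus-gated short-term synaptic modification, the sketch below adds a fast Hebbian component to fixed recurrent weights while a stimulus is applied. The time constants, gains, and update rule are assumptions for illustration and are not the mechanism of [8].

```python
import numpy as np

# Toy sketch: while a stimulus pattern is presented, a fast Hebbian term is
# added to the fixed recurrent weights, deepening the stimulus-driven state
# toward a point attractor. All parameter values are assumed.

N = 100
rng = np.random.default_rng(3)
W_fixed = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # baseline recurrent weights
W_short = np.zeros((N, N))                            # short-term component

stimulus = rng.choice([-1.0, 1.0], size=N)            # feature pattern being presented
x = rng.choice([-1.0, 1.0], size=N)                   # current network state

eta, tau = 0.05, 20.0                                 # learning rate / decay constant (assumed)
for t in range(200):
    x = np.tanh((W_fixed + W_short) @ x + stimulus)
    W_short += eta * (np.outer(x, x) / N - W_short / tau)

# The decay term (-W_short / tau) makes the modification short-term: it fades
# unless sustained by ongoing stimulus-driven activity, so the deepened
# attractor is tied to the period of stimulus application.
```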


References (10)

  • O. Hoshino et al., Role of itinerancy among attractors as dynamical map in distributed coding scheme, Neural Networks (1997)
  • A.R. Damasio, The brain binds entities and events by multiregional activation from convergence zones, Neural Computation (1989)
  • M.J. Eacott et al., Preserved recognition memory for small sets, and impaired stimulus identification for large sets, following rhinal cortex ablations in monkeys, Eur. J. Neurosci. (1994)
  • D. Gaffan, Dissociated effects of perirhinal cortex ablation, fornix transection and amygdalectomy: evidence for multiple memory systems in the primate temporal lobe, Exp. Brain Res. (1994)
  • O. Hoshino et al., Self-organized phase transitions in neural networks as a neural mechanism of information processing, Proc. Natl. Acad. Sci. USA (1996)
There are more references available in the full text version of this article.

Osamu Hoshino received his Ph.D. in Biophysics from the University of Electro-Communications in 1998. He is presently a Research Associate in the Department of Applied Physics and Chemistry at the University of Electro-Communications, Tokyo, Japan. His research interests include the neural basis of brain function and animal behavior.

Yoshiki Kashimori received his Ph.D. from Osaka City University in 1985. He is a Research Associate in the Department of Applied Physics and Chemistry at the University of Electro-Communications. His research interest is to clarify the neural mechanisms of information processing in the electrosensory, auditory, and gustatory systems, based on modeling of neurons and their networks. He also investigates the emergence of dynamical order in various biological systems, based on nonlinear dynamics.

Takeshi Kambara received his Ph.D. from the Tokyo Institute of Technology in 1970. He is a Professor of Biophysics in the Department of Applied Physics and Chemistry and a Professor of Biological Information Science at the Graduate School of Information Systems, University of Electro-Communications. His scientific interests cover the neural mechanisms of information processing in the olfactory, auditory, visual, gustatory, and electrosensory systems, and the emergence of dynamical order in various biological complex systems. His research is carried out using in silico methods.
