INFANT neural controller for adaptive sensory-motor coordination

https://doi.org/10.1016/0893-6080(91)90001-L

Abstract

This review presents a theory and prototype for a neural controller called INFANT that learns sensory-motor coordination from its own experience. Three adaptive abilities are discussed: locating stationary targets with movable sensors; grasping arbitrarily positioned and oriented targets in 3D space with multijoint arms; and positioning an unforeseen payload with accurate and stable movements despite unknown sensor feedback delay. INFANT adapts to unforeseen changes in the geometry of the physical motor system, in the internal dynamics of the control circuits, and in the location, orientation, shape, weight, and size of objects. It learns to grasp an elongated object accurately with almost no information about the geometry of the physical sensory-motor system. The controller relies on the self-consistency between sensory and motor signals to achieve unsupervised learning, and it is designed to be generalized to coordinate any number of sensory inputs with limbs of any number of joints. The principal theme of the review is how various geometries of interacting topographic neural fields can satisfy the constraints of adaptive behavior in complete sensory-motor circuits.
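
The core learning scheme summarized above, unsupervised adaptation driven by the self-consistency between sensory and motor signals, can be illustrated with a minimal sketch. The following Python fragment is a toy under stated assumptions, not the INFANT architecture itself: it assumes a planar two-joint arm, replaces the topographic neural fields with a small fixed feature expansion, and trains a linear sensory-to-motor map with a delta rule during random motor babbling. All names, link lengths, and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Physical plant": a planar two-joint arm whose geometry is unknown to the learner.
# Link lengths are arbitrary illustrative values.
L1, L2 = 1.0, 0.8

def arm_sensory_signal(joints):
    """Forward kinematics: joint angles -> sensed end-effector position (x, y)."""
    a1, a2 = joints
    x = L1 * np.cos(a1) + L2 * np.cos(a1 + a2)
    y = L1 * np.sin(a1) + L2 * np.sin(a1 + a2)
    return np.array([x, y])

def features(sensory):
    """Fixed quadratic expansion of the raw sensory signal
    (a crude stand-in for a topographic sensory field)."""
    x, y = sensory
    return np.array([1.0, x, y, x * y, x * x, y * y])

W = np.zeros((2, 6))   # adaptive sensory-to-motor map, initially blank
lr = 0.02              # learning rate for the delta rule

# Unsupervised learning by self-consistency: the controller issues random motor
# signals (motor babbling), senses the resulting posture, and adjusts the map so
# that the sensed signal reproduces the motor signal that actually caused it.
for trial in range(20000):
    motor = rng.uniform([0.2, 0.2], [np.pi / 2, np.pi / 2])   # random joint command
    sensed = arm_sensory_signal(motor)                         # observed outcome
    predicted_motor = W @ features(sensed)
    W += lr * np.outer(motor - predicted_motor, features(sensed))

# After learning, the sensory signal of a target alone drives the reaching command.
target_joints = rng.uniform([0.2, 0.2], [np.pi / 2, np.pi / 2])
target_sensed = arm_sensory_signal(target_joints)
command = W @ features(target_sensed)
print("target position :", np.round(target_sensed, 3))
print("reached position:", np.round(arm_sensory_signal(command), 3))
```

After enough self-generated trials, the target's sensed position alone yields an approximately correct reaching command. The sketch mirrors the review's central point: the learner is given no explicit model of the arm geometry, only the consistency between what was commanded and what was subsequently sensed.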

Cited by (61)

    • Inverse Kinematics of Dextrous Manipulators

      2012, Neural Systems for Robotics
    • A developmental algorithm for ocular-motor coordination

      2010, Robotics and Autonomous Systems
      Citation Excerpt:

      Even when saccades have been learned they have often not been very closely aligned with existing psychological data and knowledge. For example, [39] addressed the problem of driving moveable visual sensors to locate static objects. The emphasis was on topographic mappings and artificial neural networks, but the neural controller needed 100,000 trials during training. [40]

    • A modular neural network architecture for step-wise learning of grasping tasks

      2007, Neural Networks
      Citation Excerpt:

      Recently, another kind of approach has emerged. This new approach is based on the utilization of neural networks to define grasping configurations or to learn the mapping from an object shape to a hand configuration or a grasp choice (Guigon, Grandguillaume, Otto, Boutkhil, & Burnod, 1994; Kuperstein, 1991; Taha, Brown, & Wright, 1997; Uno, Fukumura, Suzuki, & Kawato, 1999). The previous studies emphasize the correspondence between an object and a hand shape.

    • From behaviour-based robots to motivation-based robots

      2005, Robotics and Autonomous Systems