Stereokinematic analysis of visual data in active convergent stereoscopy

https://doi.org/10.1016/S0921-8890(98)00033-5

Abstract

The core contribution of this study is a mathematical model of the combination of stereopsis and kineopsis under active viewing, intended to serve as a basis for algorithms that implement active stereokinematic analysis of visual data.

The model is specified by two main groups of equations. (A) The fundamental equations of interpretation: these relate the unknowns of perception (depth and motion in space) to image variables (image positions and optical velocities) and viewing-system control variables (angles of stereoscopic convergence and angular velocities of viewing-system movement), in both a neck-eyes and a neck-less mode of active viewing. (B) The equations of control regimes: these specify strategies for controlling the movement of the stereoscopic viewing system (neck-eyes and neck-less modes), expressed in a form that can be integrated with the fundamental equations.
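
For orientation only, the following classical relations illustrate the kind of quantities these two groups of equations couple; they are generic textbook forms, not the paper's own equations, and the symbols (baseline b, convergence angles \alpha_l and \alpha_r measured from the directions normal to the baseline, depth Z, translation \mathbf{T}, rotation \boldsymbol{\omega}) are introduced here purely for illustration. In a convergent stereo rig, the convergence angles determine the depth of the fixated point by triangulation, while the image velocity (u, v) of a point at depth Z under rigid motion follows the standard flow equations (unit focal length; sign conventions vary):

\[
Z \;=\; \frac{b}{\tan\alpha_l + \tan\alpha_r},
\]
\[
u \;=\; \frac{-T_x + x\,T_z}{Z} + x y\,\omega_x - (1 + x^2)\,\omega_y + y\,\omega_z,
\qquad
v \;=\; \frac{-T_y + y\,T_z}{Z} + (1 + y^2)\,\omega_x - x y\,\omega_y - x\,\omega_z.
\]

Relations of this kind, written for both cameras and coupled to the known control variables (convergence angles and the angular velocities of head and eye movements), leave depth and motion in space as the unknowns to be recovered, which is the structure the abstract attributes to the fundamental equations of interpretation.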

The presentation of the model is prefaced by a motivational discussion and the model's prerequisite background, and followed by an account of some of its algorithmic implications.

