Model-driven active visual tracking

https://doi.org/10.1016/S1077-2014(98)90004-3

Abstract

We have previously demonstrated that the performance of tracking algorithms can be improved by integrating information from multiple cues in a model-driven Bayesian reasoning framework. Here we extend this work to active vision tracking with variable camera geometry. Many existing active tracking algorithms avoid the problem of variable camera geometry by tracking view-independent features, such as corners and lines. However, the performance of such single-feature algorithms deteriorates greatly in the presence of specularities and dense clutter. We show that, by integrating multiple cues and updating the camera geometry on-line, it is possible to track a complicated object moving arbitrarily in three-dimensional (3D) space.

We use a four degree-of-freedom (4-DoF) binocular camera rig to track three focus features of an industrial object whose complete model is known. The camera geometry is updated using the rig control commands and the kinematic model of the stereo head. The extrinsic parameters are further refined by interpolation from a previously sampled calibration of the head work space.
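As a minimal sketch of the extrinsic refinement step, the snippet below interpolates small corrections to the nominal kinematic extrinsics from calibration samples taken off-line across the head's work space. The sample layout, the inverse-distance weighting, and the Rodrigues-vector representation of the rotation correction are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def interpolate_correction(joint_angles, samples, k=4):
    """Interpolate an extrinsic correction at the current head configuration.

    joint_angles : current 4-DoF head configuration (pan, tilt, vergences).
    samples      : list of (joint_angles, dR_vec, dt) measured off-line at a
                   grid of head configurations; dR_vec is a small rotation
                   correction (Rodrigues vector), dt a translation correction.
    Returns (dR_vec, dt) blended from the k nearest calibration samples by
    inverse-distance weighting in joint space (an assumed scheme).
    """
    q = np.asarray(joint_angles, dtype=float)
    dists = np.array([np.linalg.norm(q - np.asarray(s[0], dtype=float))
                      for s in samples])
    idx = np.argsort(dists)[:k]
    w = 1.0 / (dists[idx] + 1e-9)
    w /= w.sum()
    dR = sum(wi * np.asarray(samples[i][1], dtype=float) for wi, i in zip(w, idx))
    dt = sum(wi * np.asarray(samples[i][2], dtype=float) for wi, i in zip(w, idx))
    return dR, dt

# Usage: the corrections are composed with the extrinsics predicted from the
# rig control commands and the stereo head's forward kinematics.
```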

The 2D target position estimates are obtained by a combination of blob detection, edge searching and gray-level matching, aided by projecting the model's geometric structure using the current estimates of the camera geometry. The information is represented as a probability density distribution and propagated in a Bayes net. The Bayesian reasoning performed in the 2D images is coupled through the rigid-model geometry constraint in 3D space.
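The following sketch illustrates the cue-combination idea: each cue (blob detection, edge search, gray-level matching) supplies a likelihood over candidate 2D positions for a focus feature, and these are multiplied with a prior predicted from the model projection. The grid representation, the conditional-independence assumption, and the function names are illustrative; they stand in for the paper's Bayes-net formulation rather than reproducing it.

```python
import numpy as np

def fuse_cues(prior, cue_likelihoods):
    """Combine cue likelihoods with a projection-based prior.

    prior, cue_likelihoods[i] : 2D arrays over candidate image positions.
    Returns the normalised posterior p(position | all cues), assuming the
    cues are conditionally independent given the true feature position.
    """
    posterior = prior.astype(float).copy()
    for like in cue_likelihoods:
        posterior *= like
    s = posterior.sum()
    if s <= 0:
        # Degenerate case: fall back to a uniform density.
        return np.full(prior.shape, 1.0 / prior.size)
    return posterior / s

# Usage sketch:
#   posterior = fuse_cues(prior_from_projection, [blob_map, edge_map, ncc_map])
#   row, col = np.unravel_index(posterior.argmax(), posterior.shape)
# The resulting 2D estimates for the three focus features are then tied
# together by the rigid-model constraint in 3D space.
```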

An αβ filter is used to smooth the pursuit motion and to predict the position of the object at the next data-acquisition iteration. The solution of the inverse kinematic problem at the predicted position is used to control the position of the stereo head.
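A minimal αβ filter along a single coordinate axis is sketched below; one such filter per axis smooths the tracked position and forwards it to the next acquisition instant. The gain values are illustrative assumptions, not those used in the paper.

```python
class AlphaBetaFilter:
    """Constant-velocity alpha-beta filter for one coordinate."""

    def __init__(self, x0, v0=0.0, alpha=0.85, beta=0.005):
        self.x, self.v = x0, v0          # position and velocity estimates
        self.alpha, self.beta = alpha, beta

    def update(self, z, dt):
        """Fold in a new measurement z taken dt seconds after the last one."""
        x_pred = self.x + self.v * dt    # predict forward
        r = z - x_pred                   # innovation (measurement residual)
        self.x = x_pred + self.alpha * r
        self.v = self.v + (self.beta / dt) * r
        return self.x, self.v

    def predict(self, dt):
        """Predicted position at the next acquisition instant; the head's
        inverse kinematics is solved at this predicted point to drive the rig."""
        return self.x + self.v * dt
```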

Finally, experiments show that a target undergoing arbitrary 3D motion can be successfully tracked in the presence of specularities and dense clutter.



Formerly with the AI Vision Research Unit, University of Sheffield, U.K.
