Regular Article
Underwater Video Mosaics as Visual Navigation Maps

https://doi.org/10.1006/cviu.2000.0848

Abstract

This paper presents a set of algorithms for the creation of underwater mosaics and illustrates their use as visual maps for underwater vehicle navigation. First, we describe the automatic creation of video mosaics, which addresses the problem of image motion estimation in a robust and automatic way. The motion estimation is based on an initial matching of corresponding areas over pairs of images, followed by a robust matching technique that can cope with a high percentage of incorrect matches. Several motion models, established within the projective geometry framework, allow for the creation of high-quality mosaics without any assumptions about the camera motion. Several tests were run on underwater image sequences, testifying to the good performance of the implemented matching and registration methods. Next, we deal with the issue of determining the 3D position and orientation of a vehicle from new views of a previously created mosaic. The problem of pose estimation is tackled using the available information on the camera intrinsic parameters. This information ranges from full knowledge of the parameters to the case where they are estimated with a self-calibration technique based on the analysis of an image sequence captured under pure rotation. The performance of the 3D positioning algorithms is evaluated on images for which accurate ground truth is available.
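The robust registration step described above — fitting a projective motion model (a homography) to putative area matches while tolerating a high percentage of incorrect matches — can be sketched with a RANSAC-style estimator. This is an illustrative reconstruction, not the authors' implementation; the function names, iteration count, and inlier threshold below are assumptions.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography from >= 4 point correspondences (DLT).

    src, dst: (N, 2) arrays of matched image coordinates.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Null-space of A (smallest singular vector) gives the homography entries.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, n_iters=500, thresh=2.0, rng=None):
    """Robustly fit a homography, tolerating many gross mismatches."""
    rng = np.random.default_rng(rng)
    n = len(src)
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(n_iters):
        # Minimal 4-point sample and candidate model.
        idx = rng.choice(n, 4, replace=False)
        H = homography_dlt(src[idx], dst[idx])
        # Symmetric-free transfer error of every match under the candidate.
        pts = np.column_stack([src, np.ones(n)]) @ H.T
        proj = pts[:, :2] / pts[:, 2:3]
        err = np.linalg.norm(proj - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the full consensus set of inliers.
    return homography_dlt(src[best_inliers], dst[best_inliers]), best_inliers
```

Because the minimal sample has only four correspondences, a few hundred iterations suffice even when a large fraction of the initial matches are wrong, which is the regime the paper targets for underwater imagery.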


    [email protected], [email protected]


    The work described in this paper has been supported by the Portuguese Foundation for Science and Technology PRAXIS XXI BD/13772/97 and Esprit-LTR Proj. 30185, NARVAL.
