Beyond the Epipolar Constraint: Integrating 3D Motion and Structure Estimation

  • Conference paper
  • In: 3D Structure from Multiple Images of Large-Scale Environments (SMILE 1998)

Abstract

This paper develops a novel solution to the problem of recovering the structure of a scene from an uncalibrated video sequence depicting that scene. The essence of the technique is a method for recovering the rigid transformation between the different views in the image sequence. Knowledge of this 3D motion allows for self-calibration and for subsequent recovery of 3D structure. The method breaks away from relying solely on the traditionally used epipolar constraint and introduces a new constraint based on the interaction between 3D motion and shape.

Up to now, structure from motion algorithms have proceeded in two well-defined steps: the first and most important step recovers the rigid transformation between two views, and the subsequent step uses this transformation to compute the structure of the scene in view. Here both steps are accomplished in a synergistic manner. Existing approaches to 3D motion estimation are mostly based on optic flow, which, however, poses a problem at depth discontinuities. If we knew where the depth discontinuities were, we could accurately estimate flow values for image patches corresponding to smooth scene patches (using any of a multitude of approaches based on smoothness constraints); but knowing the discontinuities requires solving the structure from motion problem first. In the past this dilemma has been addressed by improving the estimation of flow through sophisticated optimization techniques, whose performance often depends on the scene in view.

The main idea of this paper is based on the interaction between 3D motion and shape, which allows us to estimate the 3D motion while at the same time segmenting the scene. If we use a wrong 3D motion estimate to compute depth, we obtain a distorted version of the depth function. The distortion is such that the worse the motion estimate, the more likely we are to obtain depth estimates that are locally unsmooth, i.e., that vary more than the correct ones. Since local variability of depth is due either to an actual discontinuity or to a wrong 3D motion estimate, being able to differentiate between these two cases yields the correct motion, namely the one producing the "smoothest" estimated depth, as well as the image locations of scene discontinuities. Although no optic flow values are computed, we show that our algorithm is closely related to minimizing the epipolar constraint when the scene in view is smooth. When the imaged scene is not smooth, however, the introduced constraint has in general different properties from the epipolar constraint, and we present experimental results with real sequences where it performs better.
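The distortion argument can be illustrated with a toy sketch. This is not the paper's algorithm: the synthetic scene, the image grid, the choice of candidate motions, and the roughness score below are all assumptions for illustration, using the standard differential motion model u = (1/Z) A(p) t + B(p) w for a calibrated camera. Depth recovered with the true rotation is exactly the scene's (unsmooth only at the discontinuity), while a wrong rotation adds a depth-independent distortion field that raises the local variability of the recovered inverse depth:

```python
import numpy as np

f = 1.0                                   # focal length of a toy calibrated camera
xs = np.linspace(-0.5, 0.5, 21)
X, Y = np.meshgrid(xs, xs)                # image grid

# Scene with one depth discontinuity: a near surface on the right half.
Z = np.where(X > 0, 5.0, 10.0)
inv_z = 1.0 / Z

t = np.array([1.0, 0.0, 0.0])             # true translation
w = np.array([0.02, -0.01, 0.03])         # true rotation

def rot_flow(wv):
    """Rotational flow component B(p) @ w at every pixel (depth-independent)."""
    bx = X * Y / f * wv[0] - (f + X**2 / f) * wv[1] + Y * wv[2]
    by = (f + Y**2 / f) * wv[0] - X * Y / f * wv[1] - X * wv[2]
    return bx, by

# Exact differential motion field: u = (1/Z) * A(p) t + B(p) w.
ux = inv_z * (-f * t[0] + X * t[2]) + rot_flow(w)[0]
uy = inv_z * (-f * t[1] + Y * t[2]) + rot_flow(w)[1]

def roughness(w_cand):
    """Recover inverse depth assuming rotation w_cand; score its local variability."""
    ax = -f * t[0] + X * t[2]              # translational direction A(p) t
    ay = -f * t[1] + Y * t[2]
    bx, by = rot_flow(w_cand)
    # Per-pixel least squares on u - B w_cand = (1/Z) A t.
    inv_z_est = ((ux - bx) * ax + (uy - by) * ay) / (ax**2 + ay**2)
    return (np.sum(np.diff(inv_z_est, axis=1) ** 2)
            + np.sum(np.diff(inv_z_est, axis=0) ** 2))

# The true rotation gives the smoothest recovered depth; perturbed
# candidates distort the depth function and raise the roughness score.
print(roughness(w))                               # score for the true motion
print(roughness(w + np.array([0.0, 1.0, 0.0])))   # perturbed about the y-axis
```

The sketch perturbs only the rotation: translation candidates would additionally need the scale ambiguity handled, since a rescaled t yields a uniformly rescaled, equally smooth inverse depth.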

The support of the Office of Naval Research under Contract N00014-96-1-0587, and IBM under Grant 50000293, is gratefully acknowledged.

Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Brodský, T., Fermüller, C., Aloimonos, Y. (1998). Beyond the Epipolar Constraint: Integrating 3D Motion and Structure Estimation. In: Koch, R., Van Gool, L. (eds) 3D Structure from Multiple Images of Large-Scale Environments. SMILE 1998. Lecture Notes in Computer Science, vol 1506. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-49437-5_8

  • DOI: https://doi.org/10.1007/3-540-49437-5_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65310-3

  • Online ISBN: 978-3-540-49437-9
