
Epipolar Rectification by Singular Value Decomposition of Essential Matrix

Published in: Multimedia Tools and Applications

Abstract

Image rectification applies a pair of projective transformations, or homographies, to a stereo image pair so that epipolar lines in the original images map to horizontally aligned lines in the rectified images. For stereo rigs in which the intrinsic parameters of the cameras are known but their extrinsic parameters are unknown, this paper presents a novel stereo rectification method based on the essential matrix, which is derived from the fundamental matrix. Closed-form analytical solutions for the rectifying projective transformations are obtained directly by performing SVD on the essential matrix, without any optimization process. Experimental results show that the proposed method achieves higher efficiency and rectification accuracy, as well as better scale invariance and robustness, than existing methods.
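To make the pipeline summarized above concrete, the following Python sketch shows a generic essential-matrix rectification flow (point matches → fundamental matrix → essential matrix → SVD → rectifying transformations). It is only an illustration under stated assumptions, not the authors' closed-form construction: the inputs `pts1`, `pts2`, `K1`, `K2` and the image size are hypothetical, and OpenCV's standard calibrated rectification stands in for the proposed SVD-based homographies.

```python
import numpy as np
import cv2

def rectifying_homographies(pts1, pts2, K1, K2, image_size=(640, 480)):
    """Sketch of an essential-matrix-based rectification pipeline.

    pts1, pts2: N x 2 arrays of matched pixel coordinates (hypothetical inputs).
    K1, K2:     3 x 3 known intrinsic matrices.
    Uses generic OpenCV building blocks, not the paper's closed-form solution.
    """
    # 1. Fundamental matrix from point correspondences (RANSAC for robustness).
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)

    # 2. Essential matrix from F and the known intrinsics: E = K2^T F K1.
    E = K2.T @ F @ K1

    # 3. SVD of E; a valid essential matrix has two equal singular values and
    #    one zero singular value, so project onto that manifold first.
    U, S, Vt = np.linalg.svd(E)
    E = U @ np.diag([1.0, 1.0, 0.0]) @ Vt

    # 4. Recover the relative rotation R and translation direction t from E,
    #    then compute rectifying rotations and new projection matrices
    #    (Bouguet-style, standing in for the paper's closed-form homographies).
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K1)
    R1, R2, P1, P2, _, _, _ = cv2.stereoRectify(
        K1, np.zeros(5), K2, np.zeros(5), image_size, R, t)

    # 5. Rectifying homographies acting on pixel coordinates.
    H1 = P1[:, :3] @ R1 @ np.linalg.inv(K1)
    H2 = P2[:, :3] @ R2 @ np.linalg.inv(K2)
    return H1, H2
```

The resulting `H1` and `H2` can then be applied with `cv2.warpPerspective` to obtain row-aligned image pairs.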



Notes

  1. Available from http://perso.lcpc.fr/tarel.jean-philippe/syntim/paires.html


Acknowledgements

This research was supported by the National Natural Science Foundation of China (Nos. 61332015 and 61402320) and by a research project of the Hubei Provincial Department of Education, China (B2017080).

Author information

Correspondence to Hong Zhu.

Appendix A

Proposition

For any non-zero vector \( \mathbf{t} = [t_x, t_y, t_z]^{\mathrm{T}} \), the symmetric matrix \( \mathbf{A} = [\mathbf{t}]_{\times}[\mathbf{t}]_{\times}^{\mathrm{T}} \) has two equal eigenvalues, both equal to the squared norm \( \|\mathbf{t}\|^2 \), and its third eigenvalue is 0.

Proof

Since \( \mathbf{A}\mathbf{t} = \left([\mathbf{t}]_{\times}[\mathbf{t}]_{\times}^{\mathrm{T}}\right)\mathbf{t} = \mathbf{0} \), zero is an eigenvalue of the symmetric matrix A, and t is its corresponding eigenvector. Furthermore, A can be written as

$$ \mathbf{A} = [\mathbf{t}]_{\times}[\mathbf{t}]_{\times}^{\mathrm{T}} = -[\mathbf{t}]_{\times}[\mathbf{t}]_{\times} = -\begin{bmatrix} -(t_y^2+t_z^2) & t_x t_y & t_x t_z \\ t_x t_y & -(t_x^2+t_z^2) & t_y t_z \\ t_x t_z & t_y t_z & -(t_x^2+t_y^2) \end{bmatrix} $$

Let \( \lambda_1 = 0 \), \( \lambda_2 \), and \( \lambda_3 \) denote the three eigenvalues of A. The characteristic polynomial of A is

$$ |\lambda\mathbf{I} - \mathbf{A}| = \begin{vmatrix} \lambda - (t_y^2+t_z^2) & t_x t_y & t_x t_z \\ t_x t_y & \lambda - (t_x^2+t_z^2) & t_y t_z \\ t_x t_z & t_y t_z & \lambda - (t_x^2+t_y^2) \end{vmatrix} = \lambda(\lambda - \lambda_2)(\lambda - \lambda_3) = \lambda^3 - (\lambda_2 + \lambda_3)\lambda^2 + \lambda_2\lambda_3\lambda = 0 $$

Comparing coefficients (the trace of A and the sum of its principal 2 × 2 minors) gives

$$ \begin{aligned} \lambda_2 + \lambda_3 &= A_{11} + A_{22} + A_{33} = 2\left(t_x^2 + t_y^2 + t_z^2\right) \\ \lambda_2\lambda_3 &= t_x^4 + t_y^4 + t_z^4 + 2t_x^2 t_y^2 + 2t_y^2 t_z^2 + 2t_x^2 t_z^2 = \left(t_x^2 + t_y^2 + t_z^2\right)^2 \end{aligned} $$

Writing \( s = t_x^2 + t_y^2 + t_z^2 \), the two equations above show that \( \lambda_2 \) and \( \lambda_3 \) are the roots of \( \lambda^2 - 2s\lambda + s^2 = (\lambda - s)^2 = 0 \), hence

$$ \lambda_2 = \lambda_3 = t_x^2 + t_y^2 + t_z^2 = \|\mathbf{t}\|^2 $$
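As a quick numerical sanity check of the proposition, the short NumPy snippet below builds \( \mathbf{A} = [\mathbf{t}]_{\times}[\mathbf{t}]_{\times}^{\mathrm{T}} \) for an arbitrary example vector t (any non-zero choice works) and confirms that its eigenvalues are 0, \( \|\mathbf{t}\|^2 \), \( \|\mathbf{t}\|^2 \).

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix [t]_x of a 3-vector t."""
    tx, ty, tz = t
    return np.array([[0.0, -tz,  ty],
                     [ tz, 0.0, -tx],
                     [-ty,  tx, 0.0]])

t = np.array([1.0, -2.0, 3.0])      # arbitrary non-zero example vector
A = skew(t) @ skew(t).T             # A = [t]_x [t]_x^T

eigvals = np.sort(np.linalg.eigvalsh(A))
print(eigvals)                      # approximately [0, |t|^2, |t|^2]
assert np.isclose(eigvals[0], 0.0)
assert np.allclose(eigvals[1:], np.dot(t, t))
```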


Cite this article

Wu, W., Zhu, H. & Zhang, Q. Epipolar Rectification by Singular Value Decomposition of Essential Matrix. Multimed Tools Appl 77, 15747–15771 (2018). https://doi.org/10.1007/s11042-017-5149-0

