
Achieving invariance to the temporal offset of unsynchronized cameras through epipolar point-line triangulation

  • Original Paper
  • Published in: Machine Vision and Applications

Abstract

This paper addresses the stereo camera synchronization problem for dynamic scenes by proposing a new triangulation method that is invariant to the temporal offset between the cameras. Unlike spatio-temporal alignment approaches, our method estimates the correct positions of the tracked points without explicitly estimating the temporal offset. The method relies on epipolar geometry: in the presence of a temporal delay, a tracked point does not lie on its corresponding epipolar line, which induces triangulation errors. We solve this problem by intersecting the point's motion trajectory with the corresponding epipolar line. The proposed method does not require calibrated cameras, since it can rely solely on the fundamental matrix. A major advantage of our approach is that the temporal offset may change throughout the sequence. Evaluated on synthetic and real data, the method proves stable and robust to varying temporal offsets as well as complex motions.
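
As a rough illustration of the idea summarized above, the following Python sketch intersects a tracked point's motion trajectory with the epipolar line induced by the fundamental matrix. This is not the authors' implementation: the function names, the fundamental matrix F, and the pixel coordinates are hypothetical placeholders, and approximating the trajectory between two consecutive frames by a straight segment is an assumption made only for the sake of the example.

# Minimal sketch of the point-line intersection idea described in the abstract.
# Assumption: the tracked point's motion between two consecutive frames of the
# delayed camera is approximated by a straight segment. Names and values below
# are illustrative, not taken from the paper.
import numpy as np

def epipolar_line(F, x_ref):
    """Epipolar line l' = F @ x_ref in the second image for a homogeneous
    point x_ref in the reference image, normalized so that point-to-line
    distances are in pixels."""
    l = F @ x_ref
    return l / np.linalg.norm(l[:2])

def epipolar_point_line_correction(F, x_ref, x_prev, x_next):
    """Estimate where the tracked point should lie in the unsynchronized
    camera: intersect its motion trajectory, linearized between two
    consecutive observations x_prev and x_next (homogeneous 3-vectors),
    with the epipolar line induced by x_ref."""
    l = epipolar_line(F, x_ref)
    # Line joining the two homogeneous observations (cross product of points).
    traj = np.cross(x_prev, x_next)
    # Intersection of two homogeneous lines (cross product of lines).
    x_corr = np.cross(l, traj)
    if abs(x_corr[2]) < 1e-12:
        return None  # trajectory (nearly) parallel to the epipolar line
    return x_corr / x_corr[2]

# Toy usage with an assumed fundamental matrix and pixel coordinates.
F = np.array([[0.0, -1e-6, 1e-3],
              [1e-6, 0.0, -2e-3],
              [-1e-3, 2e-3, 1.0]])
x_ref  = np.array([320.0, 240.0, 1.0])   # point tracked in the reference camera
x_prev = np.array([300.0, 250.0, 1.0])   # same point, previous frame, other camera
x_next = np.array([340.0, 260.0, 1.0])   # same point, next frame, other camera
print(epipolar_point_line_correction(F, x_ref, x_prev, x_next))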

Author information


Corresponding author

Correspondence to Rania Benrhaiem.


About this article


Cite this article

Benrhaiem, R., Roy, S. & Meunier, J. Achieving invariance to the temporal offset of unsynchronized cameras through epipolar point-line triangulation. Machine Vision and Applications 27, 545–557 (2016). https://doi.org/10.1007/s00138-016-0765-7

