
Large-Parallax Multi-camera Calibration Method for Indoor Wide-Baseline Scenes

  • Conference paper
  • First Online:
Intelligent Robotics and Applications (ICIRA 2023)

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 14268))


Abstract

Multi-camera calibration plays a crucial role in enabling efficient vision-based human-robot interaction and related applications. This paper introduces a novel approach to multi-camera calibration, tailored for indoor wide-baseline scenes, that leverages 3D models. Calibration in such scenarios is particularly challenging because camera perspectives vary widely, and the small, distant common view area shared by the cameras makes feature point matching difficult. Traditional methods rely on matching 2D feature points, as in structure-from-motion, or on large common-view calibration plates, as in stereo camera calibration. In contrast, our proposed method requires neither calibration boards nor feature point pairs. Instead, we calibrate the external parameters of the multi-camera system by extracting orthogonal parallel lines within each camera’s view and computing the optimal vanishing points. The extracted lines are first used to establish an accurate indoor 3D model; we then incorporate the easily obtainable camera height as a prior, improving the estimation of the transformation matrices among the cameras. Extensive experiments in both real and simulated environments validate the superiority of our approach over manual marker-based structure-from-motion methods, establishing its effectiveness for multi-camera calibration.
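The core geometric step the abstract describes, recovering a vanishing point from image lines that are parallel in 3D, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes line segments have already been detected and grouped by 3D direction, and it estimates the vanishing point as the least-squares common intersection of the homogeneous line equations via SVD.

```python
import numpy as np

def line_homogeneous(p1, p2):
    # Homogeneous line l through two image points: l = p1 x p2,
    # with points lifted to (x, y, 1).
    a = np.array([p1[0], p1[1], 1.0])
    b = np.array([p2[0], p2[1], 1.0])
    return np.cross(a, b)

def vanishing_point(segments):
    # Each line constrains the vanishing point v by l_i^T v = 0.
    # Stack the constraints and take the right singular vector with
    # the smallest singular value as the least-squares solution.
    L = np.array([line_homogeneous(p, q) for p, q in segments])
    _, _, vt = np.linalg.svd(L)
    v = vt[-1]
    return v[:2] / v[2]  # back to inhomogeneous image coordinates

# Two segments from lines that are parallel in 3D: their images
# converge at the vanishing point.
segs = [((0.0, 0.0), (4.0, 1.0)),   # y = x/4
        ((0.0, 2.0), (4.0, 2.5))]   # y = 2 + x/8
```

With the two hypothetical segments above, `vanishing_point(segs)` returns the intersection `(16, 4)`. In practice the method would repeat this per orthogonal direction, which is what ties the vanishing points to the camera's rotation relative to the room.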



Author information


Corresponding author

Correspondence to Dongchen Wang .


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Wang, D., Liu, J., Xu, X., Chen, Y., Hu, Q., Zhang, J. (2023). Large-Parallax Multi-camera Calibration Method for Indoor Wide-Baseline Scenes. In: Yang, H., et al. Intelligent Robotics and Applications. ICIRA 2023. Lecture Notes in Computer Science(), vol 14268. Springer, Singapore. https://doi.org/10.1007/978-981-99-6486-4_3

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-6485-7

  • Online ISBN: 978-981-99-6486-4

  • eBook Packages: Computer Science, Computer Science (R0)
