
A new monocular vision measurement method to estimate 3D positions of objects on floor

  • Research Article
  • Published in: International Journal of Automation and Computing

Abstract

A new visual measurement method is proposed to estimate the three-dimensional (3D) position of an object on the floor using a single camera. The camera, fixed on a robot, is inclined with respect to the floor. A measurement model involving the camera's extrinsic parameters, such as its height and pitch angle, is described. Once the camera's intrinsic parameters are calibrated, a single image of a chessboard pattern placed on the floor suffices to calibrate the extrinsic parameters. The position of an object on the floor can then be computed with the measurement model. Furthermore, the height of an object can be calculated from a pair of points on a vertical line that share the same position on the floor. Compared with conventional methods that estimate only positions on a plane, this method obtains full 3D positions. Indoor experiments verify the accuracy and validity of the proposed method.
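The geometry sketched in the abstract can be illustrated with a short back-projection example. The sketch below is not the paper's implementation; all parameter values (focal length, principal point, camera height, pitch angle) are hypothetical placeholders. It shows the two steps the abstract describes: intersecting a pixel ray with the floor plane to recover the object's floor position, and using a second (top) pixel on the same vertical line to recover the object's height.

```python
import math

# Hypothetical parameters (NOT from the paper): assumed intrinsics and pose.
FX = FY = 800.0            # focal lengths in pixels
CX, CY = 320.0, 240.0      # principal point
H = 1.0                    # camera height above the floor (m)
PITCH = math.radians(30.0) # downward pitch of the optical axis

def floor_point(u, v, fx=FX, fy=FY, cx=CX, cy=CY, h=H, pitch=PITCH):
    """Back-project pixel (u, v) onto the floor plane z = 0.

    World frame: X forward along the ground, Y to the left, Z up,
    with the camera centre at (0, 0, h) and the optical axis
    pitched down by `pitch` from horizontal.
    """
    dx = (u - cx) / fx                 # normalized image coordinates
    dy = (v - cy) / fy
    s, c = math.sin(pitch), math.cos(pitch)
    denom = dy * c + s                 # ray must point below the horizon
    if denom <= 0:
        raise ValueError("pixel ray does not intersect the floor")
    t = h / denom                      # ray parameter at the floor plane
    return t * (c - dy * s), -t * dx   # (X forward, Y left) on the floor

def object_height(u, v, x_floor, fy=FY, cy=CY, h=H, pitch=PITCH):
    """Height of a point whose vertical drop lands at forward distance x_floor.

    (u, v) is the pixel of the object's top; x_floor is the forward
    distance recovered from the object's bottom pixel via floor_point().
    """
    dy = (v - cy) / fy
    s, c = math.sin(pitch), math.cos(pitch)
    t = x_floor / (c - dy * s)         # ray parameter where X matches the base
    return h - t * (dy * c + s)        # Z of the top-pixel ray at that parameter
```

For a pair of points on a vertical edge, the bottom pixel gives the floor position via `floor_point`, and the top pixel, constrained to the same floor position, gives the height via `object_height`. A quick sanity check: the principal point back-projects to a point on the optical axis, at forward distance h/tan(pitch).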




Author information


Corresponding author

Correspondence to Zhi-Qiang Cao.

Additional information

This work was supported by the National Natural Science Foundation of China (Nos. 61273352 and 61473295), the National High Technology Research and Development Program of China (863 Program) (No. 2015AA042307), and the Beijing Natural Science Foundation (No. 4161002).

Recommended by Associate Editor Xun Chen

Ling-Yi Xu received the B. Sc. degree in control theory and control engineering from the University of Science and Technology Beijing, China in 2010. Currently, she is a master's student in control theory and control engineering at the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, China.

Her research interest is visual measurement for robots.

ORCID iD: 0000-0003-2984-7849

Zhi-Qiang Cao received the B. Sc. and M. Sc. degrees from Shandong University of Technology, China in 1996 and 1999, respectively. In 2002, he received the Ph.D. degree in control theory and control engineering from the Institute of Automation, Chinese Academy of Sciences, China. He is currently a professor in the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, China.

His research interests include environmental cognition, robot control and multi-robot coordination.

ORCID iD: 0000-0003-1801-3363

Peng Zhao received the B. Sc. degree in mechanical design and automation science from Beijing Information Science and Technology University, China in 2010. He received the Ph.D. degree in control theory and control engineering from the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, China in 2015.

His research interests include multi-robot system and visual servoing.

Chao Zhou received the B. Sc. degree (Hons.) in automation from Southeast University, China in 2003, and the Ph.D. degree in control theory and control engineering from the Institute of Automation, Chinese Academy of Sciences, China in 2008. He is currently an associate professor in the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, China.

His research interests include motion control of robots and bio-inspired robotic fish.

ORCID iD: 0000-0003-4461-8075

Rights and permissions


About this article


Cite this article

Xu, LY., Cao, ZQ., Zhao, P. et al. A new monocular vision measurement method to estimate 3D positions of objects on floor. Int. J. Autom. Comput. 14, 159–168 (2017). https://doi.org/10.1007/s11633-016-1047-6


Keywords

Navigation