A new method to estimate the pose of an arbitrary 3D object without prerequisite knowledge: projection-based 3D perception

Original Article · Published in Artificial Life and Robotics

Abstract

Recognizing a target object and measuring its pose are essential functions of robot vision. Most recognition methods require prior knowledge of the target object to perform pose estimation, which limits the applicability of robot vision systems. To overcome this limitation, the authors proposed a new approach that estimates the pose of an arbitrary target using stereo vision, inspired by the role of parallax in human perception. The authors extended their previous research presented at AROB 2020 and expanded the capability of projection-based 3D perception (Pb3DP). By tracking the trajectory of a moving target with a hand–eye robot, it was confirmed that the Pb3DP method yields feasible results in the visual servoing of an unknown target object. This paper describes the methodology of the Pb3DP approach in detail and demonstrates its effectiveness through 6-DoF visual servoing experiments using a stereo-vision hand–eye robot.
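The projection-and-match idea behind such stereo pose estimation can be illustrated with a minimal sketch. This is not the authors' implementation: the camera parameters (`fx`, `cx`, `cy`, `baseline`), the yaw-only rotation, and the random candidate search are simplifying assumptions made here for illustration, whereas the actual Pb3DP method estimates full 6-DoF pose. A candidate pose is scored by projecting model points into both image planes and comparing against the stereo observations:

```python
import numpy as np

def project_stereo(points_obj, pose, fx=500.0, cx=320.0, cy=240.0, baseline=0.1):
    """Project 3D model points (object frame) into left/right image planes for a
    candidate pose (x, y, z, yaw). Pinhole cameras separated by `baseline` along x."""
    x, y, z, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])               # yaw-only rotation for brevity
    pts = points_obj @ R.T + np.array([x, y, z])  # object -> camera-rig frame

    def pinhole(shift):
        u = fx * (pts[:, 0] - shift) / pts[:, 2] + cx
        v = fx * pts[:, 1] / pts[:, 2] + cy
        return np.stack([u, v], axis=1)

    return pinhole(-baseline / 2), pinhole(+baseline / 2)  # left, right images

def fitness(pose, points_obj, obs_left, obs_right):
    """Negative mean reprojection error over both images: larger is better."""
    pl, pr = project_stereo(points_obj, pose)
    return -0.5 * (np.linalg.norm(pl - obs_left, axis=1).mean()
                   + np.linalg.norm(pr - obs_right, axis=1).mean())

# Toy demonstration: synthesize stereo observations from a "true" pose, then
# score randomly perturbed pose hypotheses and keep the fittest one.
rng = np.random.default_rng(0)
model = rng.uniform(-0.05, 0.05, size=(20, 3))   # small rigid 3D object
true_pose = np.array([0.02, -0.01, 0.6, 0.1])    # x, y, z [m], yaw [rad]
obs_l, obs_r = project_stereo(model, true_pose)
candidates = true_pose + rng.normal(0.0, 0.02, size=(200, 4))
best = max(candidates, key=lambda p: fitness(p, model, obs_l, obs_r))
print(fitness(best, model, obs_l, obs_r))        # close to 0 (pixel units)
```

Here the 3D model points are assumed known for simplicity; the point of Pb3DP is precisely to avoid such prerequisite knowledge, so this sketch only illustrates the scoring step of projecting a pose hypothesis into both views and matching it against the observations.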



Author information

Corresponding author

Correspondence to Yejun Kou.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was presented in part at the 26th International Symposium on Artificial Life and Robotics (Online, January 21–23, 2021).

About this article


Cite this article

Kou, Y., Toda, Y. & Minami, M. A new method to estimate the pose of an arbitrary 3D object without prerequisite knowledge: projection-based 3D perception. Artif Life Robotics 27, 149–158 (2022). https://doi.org/10.1007/s10015-021-00718-7

