
Viewing Angle Generative Model for 7-DoF Robotic Grasping

  • Conference paper
Artificial Intelligence (CICAI 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13070)

Included in the conference series: CAAI International Conference on Artificial Intelligence (CICAI)

Abstract

Grasping is the first step in most robotic manipulation tasks, and it is essential for deploying robots in real-life scenarios. For humans, grasping novel objects is a naturally acquired ability; for robots, it remains a challenging task due to complex object shapes and incomplete visual information. Many current grasp pose estimation methods first construct a 3D model of the scene, generate a large pool of grasp candidates, and then search for the best grasp. These methods rely on high-quality 3D models, and their long pipeline makes them unsuitable for real-time processing. End-to-end grasp pose estimation methods mitigate these issues, but they can only handle low-DoF planar grasps, which fail to cover many successful grasps. In this paper, we propose the viewing angle generative network (VAGN), an approach that bridges the two aforementioned classes of methods. VAGN decouples 7-DoF grasp detection into two stages. In the first stage, it predicts from an RGB-D frame the camera viewing angle, which is also the orientation of the gripper around the object. In the second stage, it generates a planar grasp pose from another RGB-D image taken at the viewing angle predicted in stage one. We trained VAGN on the Cornell dataset. Real-robot experiments on a UR-10e with a camera-in-hand configuration show real-time processing speed and higher success rates than the state-of-the-art GR-ConvNet, in both single-object and cluttered scenes.
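To make the two-stage pipeline concrete, the following is a minimal Python sketch of the inference loop described above. It is an illustration under stated assumptions, not the authors' implementation: the stage-one and stage-two networks, the camera, and the robot interface are all hypothetical placeholders.

    # Sketch of a VAGN-style two-stage inference loop. Every name below is a
    # hypothetical placeholder; the paper does not publish code.
    def detect_7dof_grasp(camera, robot, stage_one_net, stage_two_net):
        # Stage 1: from an initial RGB-D frame, predict the viewing angle,
        # i.e. the orientation of the gripper around the object.
        rgbd = camera.capture()                # H x W x 4 array (RGB + depth)
        viewing_angle = stage_one_net(rgbd)    # e.g. (elevation, azimuth)

        # Move the wrist-mounted camera to look along the predicted angle
        # and capture a second RGB-D frame from that viewpoint.
        robot.move_camera_to(viewing_angle)
        rgbd_aligned = camera.capture()

        # Stage 2: detect a planar grasp in the aligned view -- pixel
        # position, in-plane rotation, and gripper opening width.
        x, y, theta, width = stage_two_net(rgbd_aligned)

        # The planar grasp, the depth at (x, y), and the 2-DoF viewing angle
        # together recover the full 7-DoF grasp: 3-DoF position, 3-DoF
        # orientation (viewing angle + in-plane rotation), and 1-DoF width.
        depth = rgbd_aligned[y, x, 3]
        position = camera.deproject(x, y, depth)  # pixel + depth -> 3D point
        return position, viewing_angle, theta, width

Note how stage two remains an ordinary planar grasp detector (a GR-ConvNet-style network would fit here); the viewing angle from stage one supplies the out-of-plane orientation that planar methods alone cannot express.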


References

  1. Bicchi, A., Kumar, V.: Robotic grasping and contact: a review. In: Proceedings of the 2000 IEEE International Conference on Robotics and Automation (ICRA), vol. 1, pp. 348–353 (2000)

  2. Chu, F., Xu, R., Vela, P.A.: Real-world multiobject, multigrasp detection. IEEE Robot. Autom. Lett. 3(4), 3355–3362 (2018)

  3. Depierre, A., Dellandrea, E., Chen, L.: Jacquard: a large scale dataset for robotic grasp detection. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3511–3516 (2018)

  4. Detry, R., et al.: Learning object-specific grasp affordance densities. In: 2009 IEEE 8th International Conference on Development and Learning, pp. 1–7 (2009)

  5. Fang, H.S., Wang, C., Gou, M., Lu, C.: GraspNet-1Billion: a large-scale benchmark for general object grasping. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11444–11453 (2020)

  6. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778 (2016)

  7. Herzog, A., Pastor, P., Kalakrishnan, M., Righetti, L., Bohg, J., Asfour, T., Schaal, S.: Learning of grasp selection based on shape-templates. Auton. Robots 36 (2014)

  8. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: International Conference on Learning Representations (ICLR) (2015)

  9. Kumra, S., Joshi, S., Sahin, F.: Antipodal robotic grasping using generative residual convolutional neural network. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2020)

  10. Morrison, D., Corke, P., Leitner, J.: Learning robust, real-time, reactive robotic grasping. Int. J. Robot. Res. 39(2–3), 183–201 (2020)

  11. Mousavian, A., Eppner, C., Fox, D.: 6-DOF GraspNet: variational grasp generation for object manipulation. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 2901–2910 (2019)

  12. Ni, P., Zhang, W., Zhu, X., Cao, Q.: PointNet++ grasping: learning an end-to-end spatial grasp generation algorithm from sparse point clouds. In: 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 3619–3625 (2020)

  13. ten Pas, A., Gualtieri, M., Saenko, K., Platt, R.: Grasp pose detection in point clouds. Int. J. Robot. Res. 36(13–14), 1455–1473 (2017)

  14. Peng, S., Zhou, X., Liu, Y., Lin, H., Huang, Q., Bao, H.: PVNet: pixel-wise voting network for 6DoF object pose estimation. IEEE Trans. Pattern Anal. Mach. Intell. (2020)

  15. Jiang, Y., Moseson, S., Saxena, A.: Efficient grasping from RGBD images: learning using a new rectangle representation. In: 2011 IEEE International Conference on Robotics and Automation (ICRA), pp. 3304–3311 (2011)

  16. Zeng, A., et al.: Multi-view self-supervised deep learning for 6D pose estimation in the Amazon Picking Challenge. In: 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 1386–1393 (2017)

  17. Zhou, X., Lan, X., Zhang, H., Tian, Z., Zhang, Y., Zheng, N.: Fully convolutional grasp detection network with oriented anchor box. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7223–7230 (2018)


Acknowledgement

This study was supported by Jihua Laboratory through the Self-Programming Intelligent Robot Project (No. X190101TB190) and the Funds for Young Scholar (No. X201181XB200), and by the Guangdong Basic and Applied Basic Research Foundation (No. 2020A1515110267).

Author information


Corresponding author

Correspondence to Xiang Gao.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Gao, X., Li, W., Wen, Z. (2021). Viewing Angle Generative Model for 7-DoF Robotic Grasping. In: Fang, L., Chen, Y., Zhai, G., Wang, J., Wang, R., Dong, W. (eds.) Artificial Intelligence. CICAI 2021. Lecture Notes in Computer Science (LNAI), vol. 13070. Springer, Cham. https://doi.org/10.1007/978-3-030-93049-3_27


  • DOI: https://doi.org/10.1007/978-3-030-93049-3_27


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-93048-6

  • Online ISBN: 978-3-030-93049-3

  • eBook Packages: Computer Science, Computer Science (R0)
