Visual Movement Prediction for Stable Grasp Point Detection

  • Conference paper
  • In: Proceedings of the 21st EANN (Engineering Applications of Neural Networks) 2020 Conference (EANN 2020)

Abstract

Robotic grasping of unknown objects in cluttered scenes is already well established, mainly owing to advances in deep learning methods. A major drawback of these methods is their need for large amounts of real-world training data. Furthermore, the resulting networks are not interpretable, in the sense that it is not clear why certain grasp attempts fail. To make robotic grasping traceable and to simplify the overall model, we suggest dividing the complex task of finding stable grasp points into three simpler tasks. The first is to find all grasp points at which the gripper can be lowered onto the table without colliding with the object. The second is to determine, for the grasp points and gripper parameters from the first step, how the object moves while the gripper closes. The third is to predict, for all grasp points from the second step, whether the object slips out of the gripper during lifting. This decomposition makes it possible to understand for each grasp point why it is stable and, just as importantly, why others are unstable or infeasible. In this study we focus on the second task, the prediction of the physical interaction between gripper and object while the gripper closes. We investigate different Convolutional Neural Network (CNN) architectures and identify those that best predict the physical interaction in image space. We generate the training data in the robot and physics simulator V-REP.
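
As a reading aid, the following Python sketch makes the three-stage decomposition concrete. It is not the authors' implementation: every name here (GraspCandidate, check_collision_free, predict_object_movement, predict_slippage) is a hypothetical placeholder, and the stage predicates are stubbed out; in the paper, stage two is realized by a CNN trained on V-REP simulation data.

    from dataclasses import dataclass

    # Hypothetical grasp parametrization; the paper works in image space,
    # so a candidate is assumed to be a gripper pose over the table.
    @dataclass
    class GraspCandidate:
        x: float       # grasp position in the image plane
        y: float
        angle: float   # gripper orientation
        width: float   # gripper opening width

    def check_collision_free(scene_image, grasp):
        """Stage 1 (placeholder): can the open gripper be lowered onto
        the table at this pose without colliding with the object?"""
        return True  # stub; a geometric test or learned classifier would go here

    def predict_object_movement(scene_image, grasp):
        """Stage 2, the paper's focus (stubbed here): predict in image
        space how the object moves while the gripper closes; in the
        paper this is a CNN trained on V-REP data."""
        return scene_image  # stub; a CNN would return the predicted image

    def predict_slippage(predicted_image, grasp):
        """Stage 3 (placeholder): does the object slip out of the
        gripper during lifting?"""
        return False  # stub; another learned predictor would go here

    def stable_grasp_points(scene_image, candidates):
        """Run every candidate through the three stages; a grasp point
        is reported as stable only if it passes all of them."""
        stable = []
        for grasp in candidates:
            if not check_collision_free(scene_image, grasp):
                continue  # rejected in stage 1: gripper collides with object
            predicted = predict_object_movement(scene_image, grasp)
            if predict_slippage(predicted, grasp):
                continue  # rejected in stage 3: object slips while lifting
            stable.append(grasp)
        return stable

Under these stub predicates every candidate passes; the point of the sketch is the control flow. Because each rejected candidate fails at a specific stage, unstable or infeasible grasp points can be explained rather than merely discarded, which is the interpretability argument of the abstract.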



Acknowledgements

This work was supported by the EFRE-NRW funding programme “Forschungsinfrastrukturen” (grant no. 34.EFRE-0300119).

Author information

Correspondence to Constanze Schwan.

Copyright information

© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Schwan, C., Schenck, W. (2020). Visual Movement Prediction for Stable Grasp Point Detection. In: Iliadis, L., Angelov, P., Jayne, C., Pimenidis, E. (eds) Proceedings of the 21st EANN (Engineering Applications of Neural Networks) 2020 Conference. EANN 2020. Proceedings of the International Neural Networks Society, vol 2. Springer, Cham. https://doi.org/10.1007/978-3-030-48791-1_5

  • DOI: https://doi.org/10.1007/978-3-030-48791-1_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-48790-4

  • Online ISBN: 978-3-030-48791-1

  • eBook Packages: Computer Science, Computer Science (R0)
