
The Sliced Pineapple Grid Feature for Predicting Grasping Affordances

  • Conference paper
Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2016)

Abstract

The problem of grasping unknown objects utilising vision is addressed in this work by introducing a novel feature, the Sliced Pineapple Grid Feature (SPGF). The SPGF encodes semi-local surfaces and allows structures such as “walls”, “edges” and “rims” to be distinguished. These structures are shown to be important when learning to predict grasping affordances. The SPGF is used in combination with two different grasp affordance learning methods and achieves grasp success rates of up to 87% on a combined, varied object set. For specific object classes within the object set, success rates of up to 96% are achieved. The results also show how two different grasp types can complement each other and allow grasping of objects that are not graspable by one of the types alone.
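The abstract describes the SPGF only at a high level: a descriptor that encodes semi-local surface structure and feeds a grasp-affordance learner. As a rough illustration of that general idea, the sketch below partitions a point-cloud patch into a cylindrical grid of cells, stores a mean surface normal per cell, and scores the resulting descriptor with a toy logistic predictor. This is a minimal, hypothetical sketch, not the paper's actual feature definition; the function names, cell layout, encoding and all parameter values are assumptions.

```python
# Hypothetical sketch of a grid-style semi-local surface descriptor and a
# toy grasp-affordance predictor. The cylindrical cell layout, per-cell
# mean-normal encoding, parameter values and logistic predictor are
# illustrative assumptions, not the authors' SPGF definition.
import numpy as np

def grid_surface_descriptor(points, normals, center, radius=0.05,
                            n_slices=4, n_rings=3):
    """Partition the points around `center` into height slices x radial
    rings (a cylindrical grid) and store the mean surface normal per cell,
    yielding a fixed-length descriptor of the semi-local surface shape."""
    rel = points - center
    r = np.linalg.norm(rel[:, :2], axis=1)        # radial distance in xy
    z = rel[:, 2]                                 # height along z
    keep = (r < radius) & (np.abs(z) < radius)    # crop to the local patch
    r, z, nrm = r[keep], z[keep], normals[keep]

    slice_idx = np.clip(((z / radius + 1.0) / 2.0 * n_slices).astype(int),
                        0, n_slices - 1)
    ring_idx = np.clip((r / radius * n_rings).astype(int), 0, n_rings - 1)

    desc = np.zeros((n_slices, n_rings, 3))
    for s in range(n_slices):
        for g in range(n_rings):
            cell = nrm[(slice_idx == s) & (ring_idx == g)]
            if len(cell):                         # unit mean normal per cell
                m = cell.mean(axis=0)
                desc[s, g] = m / (np.linalg.norm(m) + 1e-9)
    return desc.ravel()

def grasp_success_score(desc, w, b):
    """Toy affordance predictor: logistic score over the descriptor."""
    return 1.0 / (1.0 + np.exp(-(desc @ w + b)))

# Usage on synthetic data: a flat patch with all normals pointing up.
pts = np.random.default_rng(0).uniform(-0.05, 0.05, size=(500, 3))
nrms = np.tile([0.0, 0.0, 1.0], (500, 1))
f = grid_surface_descriptor(pts, nrms, center=np.zeros(3))
print(grasp_success_score(f, w=np.zeros(f.size), b=0.0))  # 0.5 with zero weights
```

In this reading, cells over planar regions share one dominant normal while cells straddling an edge or rim average to shorter, tilted normals, which is one plausible way a grid of surface cells could separate “walls”, “edges” and “rims” as the abstract describes.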



Acknowledgement

The research leading to these results has received funding from the European Community’s Seventh Framework Programme FP7/2007-2013 (Specific Programme Cooperation, Theme 3, Information and Communication Technologies) under grant agreement no. 270273, Xperience.

Author information

Corresponding author

Correspondence to Mikkel Tang Thomsen.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Thomsen, M.T., Kraft, D., Krüger, N. (2017). The Sliced Pineapple Grid Feature for Predicting Grasping Affordances. In: Braz, J., et al. Computer Vision, Imaging and Computer Graphics Theory and Applications. VISIGRAPP 2016. Communications in Computer and Information Science, vol 693. Springer, Cham. https://doi.org/10.1007/978-3-319-64870-5_20

  • DOI: https://doi.org/10.1007/978-3-319-64870-5_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-64869-9

  • Online ISBN: 978-3-319-64870-5

  • eBook Packages: Computer Science, Computer Science (R0)
