
Reaching and Grasping of Objects by Humanoid Robots Through Visual Servoing

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 10894)

Abstract

Visual servoing controls the motion of a robot using information from its visual sensors to achieve manipulation tasks. In this work we design and implement a robust visual servoing framework for reaching and grasping behaviours on a humanoid service robot with limited control capabilities. Our approach successfully exploits a 5-degrees-of-freedom manipulator, overcoming the control limitations of the robot while avoiding singularities, and without relying on stereo vision techniques. Using a single camera, we combine a marker-less model-based tracker for the target object, a pattern tracker for the end-effector to deal with the robot’s inaccurate kinematics, and an alternating pose-based visual servoing technique with eye-in-hand and eye-to-hand configurations to achieve a fully functional grasping system. The overall method yields better grasping results than conventional motion planning and simple inverse kinematics techniques for this robotic morphology, demonstrating a 48.8% increase in the grasping success rate.
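The pose-based visual servoing (PBVS) scheme mentioned in the abstract regulates a 6-D pose error between the current and desired end-effector frames with a proportional control law of the form v = -λ e, where the rotational part of e is the angle-axis (θu) representation. The sketch below is a minimal illustration of that control law, not the authors' implementation (which uses ViSP); the function names, the sign convention, and the identity interaction matrix are assumptions.

```python
import numpy as np

def rotation_error_theta_u(R_cur, R_des):
    # Relative rotation taking the current frame to the desired frame.
    R = R_des @ R_cur.T
    # Extract angle-axis (theta * u) from the rotation matrix.
    cos_theta = (np.trace(R) - 1.0) / 2.0
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    u = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * u

def pbvs_velocity(t_cur, R_cur, t_des, R_des, lam=0.5):
    # 6-D pose error: translation part stacked on the theta-u rotation part.
    e = np.concatenate([t_cur - t_des,
                        rotation_error_theta_u(R_cur, R_des)])
    # Proportional control law v = -lambda * e, driving the error to zero
    # (an identity interaction matrix is assumed for simplicity).
    return -lam * e
```

For example, a gripper 10 cm to the right of its goal with the correct orientation would receive a pure translational velocity command pushing it back toward the goal; in a real system such as the one in the paper, this camera-frame velocity would still have to be mapped through the robot Jacobian to joint velocities.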

Thanks to Giovanni Claudio for his help with ViSP and with bridging the Pepper robot to ROS.


Notes

  1. https://bitbucket.org/paolaArdon/master_thesis_vs_pepper.


Author information

Corresponding author: Paola Ardón.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Ardón, P., Dragone, M., Erden, M.S. (2018). Reaching and Grasping of Objects by Humanoid Robots Through Visual Servoing. In: Prattichizzo, D., Shinoda, H., Tan, H., Ruffaldi, E., Frisoli, A. (eds) Haptics: Science, Technology, and Applications. EuroHaptics 2018. Lecture Notes in Computer Science(), vol 10894. Springer, Cham. https://doi.org/10.1007/978-3-319-93399-3_31


  • DOI: https://doi.org/10.1007/978-3-319-93399-3_31


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-93398-6

  • Online ISBN: 978-3-319-93399-3

  • eBook Packages: Computer Science (R0)
