
Grasp Pose Learning from Human Demonstration with Task Constraints

  • Short Paper
  • Published in: Journal of Intelligent & Robotic Systems

Abstract

To learn grasp constraints from human demonstrations, we propose a method that combines data-driven grasp constraint learning with one-shot human demonstration of tasks. Task constraints are represented in a GMM-based, gripper-independent form and learned from simulated data with self-labeled grasp quality scores. Given a human demonstration of the task and a real-world object, the learned task constraint model infers both the unknown grasping task and the probability density distributions of the task constraints over the object point cloud. In addition, we extend a superquadric-based grasp estimation method to reproduce the grasping task with 2-finger grippers. Because the task constraints restrict the search scope of the grasp pose, the geometrically best grasp pose within the task-constrained regions can be obtained. The effectiveness of our method is verified in experiments with a UR5 robot equipped with a 2-finger gripper.
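As a rough, illustrative sketch of this pipeline (not the authors' implementation), the Python fragment below fits a GMM over contact features collected from simulated, self-labeled grasps and uses its density to restrict the grasp-pose search to the task-constrained region of a new object's point cloud. The function names, the use of scikit-learn, and the choice of raw 3-D contact positions as features are all assumptions made for illustration.

    # Minimal sketch, assuming scikit-learn and NumPy; not the authors' code.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_task_constraint_gmm(contact_features, n_components=3, seed=0):
        # Fit a gripper-independent GMM over contact-region features
        # (here: 3-D contact positions) gathered from simulated grasps.
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="full", random_state=seed)
        gmm.fit(contact_features)
        return gmm

    def task_constrained_region(gmm, point_cloud, keep_ratio=0.2):
        # Score every point of the object's cloud under the learned task
        # density and keep only the top fraction as the grasp search region.
        log_density = gmm.score_samples(point_cloud)   # log p(x | task)
        threshold = np.quantile(log_density, 1.0 - keep_ratio)
        return point_cloud[log_density >= threshold]

    # Hypothetical usage with placeholder data: 'demo_contacts' would come
    # from simulated self-labeled grasps, 'cloud' from a depth sensor.
    demo_contacts = np.random.rand(500, 3)
    cloud = np.random.rand(2000, 3)
    gmm = fit_task_constraint_gmm(demo_contacts)
    region = task_constrained_region(gmm, cloud)

A grasp planner such as the superquadric-based search described above would then evaluate candidate 2-finger poses only against points in this region, so the geometrically best pose is selected from the task-constrained subset.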

Funding

This work is sponsored by the Natural Science Foundation of Jiangsu Province (No. BK20201264), Zhejiang Lab (No. 2022NB0AB02), and the National Natural Science Foundation of China (Nos. 61573101 and 62073075).

Author information

Contributions

All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Yinghui Liu, Kun Qian, Xin Xu. The first draft of the manuscript was written by Yinghui Liu and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. Conceptualization: Kun Qian; Methodology: Yinghui Liu, Xin Xu; Formal analysis and investigation: Yinghui Liu, Kun Qian; Writing - original draft preparation: Yinghui Liu; Writing - review and editing: Yinghui Liu, Kun Qian; Funding acquisition: Kun Qian, Bo Zhou; Resources: Kun Qian, Fang Fang; Supervision: Kun Qian.

Corresponding author

Correspondence to Kun Qian.

Ethics declarations

Conflict of Interest

The authors declare that they have no commercial or associative interest that represents a conflict of interest in connection with the submitted work.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Liu, Y., Qian, K., Xu, X. et al. Grasp Pose Learning from Human Demonstration with Task Constraints. J Intell Robot Syst 105, 37 (2022). https://doi.org/10.1007/s10846-022-01650-z
