Abstract
To learn grasp constraints from human demonstrations, we propose a method that combines data-driven grasp constraint learning with one-shot human demonstration of tasks. By representing task constraints in a GMM-based, gripper-independent form, the constraints are learned from simulated data with self-labeled grasp quality scores. Given a human demonstration of the task and a real-world object, the learned task constraint model infers both the unknown grasping task and the probability density of the task constraints over the object point cloud. In addition, we extend the superquadric-based grasp estimation method to reproduce the grasping task with two-finger grippers. Because the task constraints restrict the search scope of the grasp pose, the geometrically best grasp pose within the task-constrained regions can be obtained. The effectiveness of our method is verified in experiments with a UR5 robot equipped with a two-finger gripper.
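The abstract's core idea of a GMM-based, gripper-independent task constraint that restricts the grasp search scope can be illustrated with a minimal sketch. The sketch below is an assumption, not the authors' implementation: it uses scikit-learn's `GaussianMixture` in place of the paper's learned model, random 3-D points as stand-ins for simulated contact data and the object point cloud, and a simple density quantile as the hypothetical cut-off defining the task-constrained region.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for simulated grasp data with self-labeled quality scores:
# contact points clustered around a task-relevant part of the object.
demo_points = rng.normal(loc=[0.0, 0.0, 0.1], scale=0.02, size=(200, 3))

# Fit a GMM so the task constraint is expressed as a probability
# density, independent of any particular gripper geometry.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(demo_points)

# Evaluate the constraint density over a (here synthetic) object point
# cloud and keep only the high-density points as the grasp search scope.
object_cloud = rng.uniform(-0.1, 0.2, size=(1000, 3))
log_density = gmm.score_samples(object_cloud)
threshold = np.quantile(log_density, 0.9)  # hypothetical cut-off: top 10%
constrained = object_cloud[log_density >= threshold]
print(constrained.shape[0])
```

A downstream grasp planner would then search for the geometrically best pose only among `constrained` points, rather than over the full cloud.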
Funding
This work is sponsored by the Natural Science Foundation of Jiangsu Province (No. BK20201264), Zhejiang Lab (No. 2022NB0AB02), and the National Natural Science Foundation of China (Nos. 61573101 and 62073075).
Author information
Contributions
All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by Yinghui Liu, Kun Qian, and Xin Xu. The first draft of the manuscript was written by Yinghui Liu, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. Conceptualization: Kun Qian; Methodology: Yinghui Liu, Xin Xu; Formal analysis and investigation: Yinghui Liu, Kun Qian; Writing - original draft preparation: Yinghui Liu; Writing - review and editing: Yinghui Liu, Kun Qian; Funding acquisition: Kun Qian, Bo Zhou; Resources: Kun Qian, Fang Fang; Supervision: Kun Qian.
Ethics declarations
Conflict of Interests
The authors declare that they have no commercial or associative interest that represents a conflict of interest in connection with the submitted work.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Liu, Y., Qian, K., Xu, X. et al. Grasp Pose Learning from Human Demonstration with Task Constraints. J Intell Robot Syst 105, 37 (2022). https://doi.org/10.1007/s10846-022-01650-z