Abstract
Supporting semantics play a vital role in robotic grasping tasks. However, it is difficult to identify the spatial relationships among stacked objects and grasp them in an order that prevents the stack from collapsing. This paper presents a Visual Reasoning Grasp Framework (VRGF) to address these problems. The VRGF consists of two parts: 3D semantic segmentation and Supporting Relationship Reasoning (SRR). We adopt fused projection for 3D semantic segmentation, which reduces processing time and improves the performance of the reasoning framework. The SRR module distinguishes how objects support one another in stacking scenarios. We also design a complete robotic system to verify the robustness and generalization of the framework in the real world. Experimental results demonstrate the reliability and robustness of the VRGF's grasping strategy for stacked objects.
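To make the ordered-grasp idea concrete, the sketch below shows one common way a set of pairwise supporting relations can be turned into a grasp sequence: objects with nothing resting on them are removed first, which then frees the objects beneath them. This is a minimal illustration only, not the paper's implementation; the relations here are supplied by hand rather than inferred by the SRR module, and names such as `grasp_order` and `support_pairs` are hypothetical.

```python
# Minimal sketch: derive a safe grasp order from "lower supports upper"
# relations. Assumes the relations form a directed acyclic structure,
# as in a physically stable stack.
from collections import defaultdict, deque

def grasp_order(objects, support_pairs):
    """Return a grasp sequence in which an object is picked only after
    everything resting on it has been removed.

    support_pairs: iterable of (lower, upper), meaning `lower` supports `upper`.
    """
    on_top_of = defaultdict(set)   # lower -> objects currently resting on it
    resting_on = defaultdict(set)  # upper -> objects it rests on
    for lower, upper in support_pairs:
        on_top_of[lower].add(upper)
        resting_on[upper].add(lower)

    # Objects with nothing on top are immediately graspable.
    ready = deque(o for o in objects if not on_top_of[o])
    order = []
    while ready:
        obj = ready.popleft()
        order.append(obj)
        # Removing `obj` may free the objects that were supporting it.
        for lower in resting_on[obj]:
            on_top_of[lower].discard(obj)
            if not on_top_of[lower]:
                ready.append(lower)
    return order

# Example: a cup rests on a book, which rests on a box.
print(grasp_order(["box", "book", "cup"], [("box", "book"), ("book", "cup")]))
# -> ['cup', 'book', 'box']
```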
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Ji, X., Chen, Q., Xiong, T., Xiong, T., Shang, H. (2022). VReason Grasp: An Ordered Grasp Based on Physical Intuition in Stacking Objects. In: Abraham, A., Gandhi, N., Hanne, T., Hong, TP., Nogueira Rios, T., Ding, W. (eds) Intelligent Systems Design and Applications. ISDA 2021. Lecture Notes in Networks and Systems, vol 418. Springer, Cham. https://doi.org/10.1007/978-3-030-96308-8_70
DOI: https://doi.org/10.1007/978-3-030-96308-8_70
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-96307-1
Online ISBN: 978-3-030-96308-8
eBook Packages: Intelligent Technologies and Robotics, Intelligent Technologies and Robotics (R0)