Abstract
Programming by demonstration enables robot operators to program robots without writing code. The approach is particularly relevant for repetitive assembly processes composed of similar components arranged in patterns. In industrial environments with limited CAD data, such as SMEs, point cloud data from scans serves as a practical starting point. Recognizing component patterns within point clouds and analyzing actions demonstrated in specific areas are essential for guiding robots in such automation tasks.
Our work introduces methods for identifying patterns within point clouds, enabling the efficient analysis of demonstrated actions and their subsequent execution by robotic systems. A primary focus of our approach is the computation of similarities within point clouds, a crucial step in pinpointing regions suitable for repetitive actions. We concentrate on primitive-based patterns, as these representations align closely with the surfaces of industrial objects. Given the prevalence of geometric representations in industry, recognizing recurring geometries through primitive-based descriptions is a pertinent research objective. We present the problem within a practical application context and explore the feasibility of comprehensively identifying recurrent geometries using primitive-based features.
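The abstract describes the approach only at a high level. As a rough illustration of what a primitive-based similarity computation between point-cloud regions could look like, the following minimal Python sketch fits a plane to each pre-segmented patch, summarizes it by its normal direction, fit residual, and extent, and compares patches through a normalized descriptor distance. This is our own example under these assumptions, not the method from the paper; the helper names `plane_descriptor` and `similarity_matrix` and the synthetic patches are hypothetical.

```python
# Hedged sketch: primitive-based similarity between point-cloud patches.
# Assumptions (not from the paper): patches arrive as separate (N, 3) arrays,
# each patch is summarized by a plane fit, and similarity is a simple
# normalized distance between the resulting descriptors.
import numpy as np


def plane_descriptor(points: np.ndarray) -> np.ndarray:
    """Fit a plane to an (N, 3) patch via PCA and return a compact descriptor:
    [|n_x|, |n_y|, |n_z|, RMS plane residual, bounding-box diagonal]."""
    centered = points - points.mean(axis=0)
    # The right-singular vector with the smallest singular value approximates
    # the plane normal of a near-planar patch.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    residual = np.sqrt(np.mean((centered @ normal) ** 2))
    extent = np.linalg.norm(points.max(axis=0) - points.min(axis=0))
    return np.array([*np.abs(normal), residual, extent])


def similarity_matrix(patches: list) -> np.ndarray:
    """Pairwise similarity in (0, 1]; 1 means identical descriptors."""
    desc = np.stack([plane_descriptor(p) for p in patches])
    # Standardize each descriptor dimension so no single term dominates.
    desc = (desc - desc.mean(axis=0)) / (desc.std(axis=0) + 1e-9)
    dist = np.linalg.norm(desc[:, None, :] - desc[None, :, :], axis=-1)
    return 1.0 / (1.0 + dist)


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def flat_patch() -> np.ndarray:
        # Near-planar patch in the XY plane with slight measurement noise.
        return np.c_[rng.uniform(0.0, 1.0, (200, 2)), rng.normal(0.0, 0.002, 200)]

    tilt = np.array([[1.0, 0.0, 0.0], [0.0, 0.7, 0.7], [0.0, -0.7, 0.7]])
    patches = [flat_patch(), flat_patch(), flat_patch() @ tilt]
    # The two untilted patches should form the most similar pair.
    print(np.round(similarity_matrix(patches), 2))
```

On the synthetic data, the two planar patches yield the highest off-diagonal similarity, mirroring the idea of pinpointing regions that admit the same repetitive action; the paper's actual feature set and matching strategy may differ.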
Acknowledgments
This work was carried out within the AIT-Lighthouse project (Pillar 2) and the EFRE-FTI-DemoDatenPro project, and was financed by the Austrian Institute of Technology (AIT) and by research subsidies granted by the government of Upper Austria.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Möhl, P., Ikeda, M., Hofmann, M., Pichler, A. (2024). Exploitation of Similarities in Point Clouds for Simplified Robot Programming by Demonstration. In: Secchi, C., Marconi, L. (eds) European Robotics Forum 2024. ERF 2024. Springer Proceedings in Advanced Robotics, vol 32. Springer, Cham. https://doi.org/10.1007/978-3-031-76424-0_42
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-76423-3
Online ISBN: 978-3-031-76424-0
eBook Packages: Intelligent Technologies and Robotics (R0)