
Position-aware feature matching algorithm based non-rigid point cloud registration

  • Regular Paper
  • Published in Multimedia Systems

Abstract

Existing non-rigid point cloud registration algorithms often produce matching errors in regions of a point cloud that are locally similar, and they struggle to promptly filter out the resulting misaligned correspondences. To address these issues, a non-rigid point cloud registration algorithm based on position-aware feature matching is proposed. First, the relative positions of the points are encoded using a Fourier transform, decomposing the 3D point cloud of a non-rigid object into feature-space information and 3D positional information. This enables registration of similar-looking parts while preventing the loss of positional information during network iterations. Second, global information is aggregated by the self-attention layers of the transformer blocks, and information is exchanged between point clouds through the cross-attention layers, promoting feature matching between the source and target point clouds. Next, an outlier-removal strategy is designed and integrated into a high-dimensional convolutional neural network to eliminate incorrect correspondences. The Welsch function is applied in the regularization term of the loss function to improve the algorithm's robustness to noise and to partially overlapping point clouds. Finally, comparative experiments with seven existing algorithms on the 4DMatch/4DLoMatch datasets show that the proposed method outperforms the second-best algorithm by 3.3–6.3% in the correspondence metric (IR) and by 19.15–21.57% in the registration metric (Accr). The experimental results indicate that the method effectively handles feature-similar regions, promptly filters out misaligned correspondences, and produces more accurate registration results, especially at lower overlap rates.
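Two ingredients named in the abstract, Fourier-based positional encoding of 3D coordinates and the Welsch robust function, have widely used standard forms that can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the function names, the geometric frequency ladder, and the scale parameter `nu` are all illustrative choices.

```python
import numpy as np

def fourier_position_encoding(points, num_bands=8):
    """Sinusoidal (Fourier) encoding of 3D coordinates.

    points: (N, 3) array of xyz coordinates.
    Returns an (N, 3 * 2 * num_bands) feature array in which each
    coordinate is mapped to sin/cos features at geometrically
    spaced frequencies, so relative position survives later
    feature-space processing.
    """
    freqs = (2.0 ** np.arange(num_bands)) * np.pi   # frequency ladder: pi, 2pi, 4pi, ...
    phases = points[..., None] * freqs               # (N, 3, num_bands)
    enc = np.concatenate([np.sin(phases), np.cos(phases)], axis=-1)
    return enc.reshape(points.shape[0], -1)

def welsch(x, nu=0.5):
    """Welsch robust penalty: ~x^2/(2 nu^2) for small residuals,
    saturating toward 1 for large ones, so outlier correspondences
    contribute a bounded cost."""
    return 1.0 - np.exp(-(x * x) / (2.0 * nu * nu))
```

Because the Welsch penalty saturates, a few grossly misaligned correspondences cannot dominate the regularization term, which is the usual motivation for choosing it over a plain squared error in partially overlapping registration.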


[Figures 1–10 appear in the full article.]


Data availability

The data that support the findings of this study are available from the corresponding author, [RZ], upon reasonable request.


Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant 52375178, in part by the Natural Science Foundation of Shanxi Province under Grants 202203021211206 and 202203021211189, and in part by the Graduate Student Innovation Program of Taiyuan University of Science and Technology under Grant SY2023039.

Author information


Corresponding author

Correspondence to Ronguo Zhang.

Additional information

Communicated by Chenggang Yan.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, R., Zhang, R., Hu, J. et al. Position-aware feature matching algorithm based non-rigid point cloud registration. Multimedia Systems 31, 55 (2025). https://doi.org/10.1007/s00530-024-01657-6
