
Multi-space and detail-supplemented attention network for point cloud completion

Abstract

The sparsity and incompleteness of point clouds generally pose challenges for point cloud analysis. Most existing point cloud completion methods generate point clouds from features extracted in Euclidean space alone; consequently, the generated point clouds are relatively rough. This paper proposes a multi-space and detail-supplemented attention point cloud completion network (MSDSA-Net). The key idea is to exploit multi-space features to generate high-quality point clouds. First, we construct a dual-branch multi-space feature extractor (MSFE). One branch is a local-holistic geometric feature extractor based on Euclidean space and eigenvalue space: it extracts features with similar local geometric structures even between distant points, compensating for the feature information lost with the missing part of the point cloud. The other branch is a global feature extractor that operates in Euclidean space. Second, we follow the coarse-to-fine decoding framework of general completion networks, but in the fine generation stage we propose a detail-supplemented (DS) module that enriches, in detail, the features guiding point cloud generation. Extensive experiments demonstrate that our network performs well on point cloud shape completion.
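The MSFE's eigenvalue-space branch is specified in the full text, which is not reproduced on this page. As a rough, non-authoritative illustration of what an eigenvalue-space representation can look like, the sketch below computes, for each point, the sorted eigenvalues of the covariance matrix of its k-nearest-neighbor neighborhood; the function name eigenvalue_space_features, the neighborhood size k, and the normalization are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def eigenvalue_space_features(points: np.ndarray, k: int = 16) -> np.ndarray:
    """Hypothetical per-point eigenvalue-space descriptor (illustrative only,
    not the paper's exact MSFE).

    points: (N, 3) array of xyz coordinates in Euclidean space.
    Returns an (N, 3) array of sorted, scale-normalized eigenvalues of each
    point's local covariance matrix.
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)  # (N, k) indices of the k nearest neighbors
    feats = np.empty((points.shape[0], 3))
    for i, nbrs in enumerate(idx):
        local = points[nbrs] - points[nbrs].mean(axis=0)  # center the neighborhood
        cov = local.T @ local / k                         # 3x3 local covariance
        evals = np.linalg.eigvalsh(cov)[::-1]             # eigenvalues, descending
        feats[i] = evals / (evals.sum() + 1e-9)           # normalize out scale
    return feats
```

The appeal of such a descriptor for completion is that two points far apart in Euclidean space but lying on similar local structures (for example, planar patches or linear edges) receive similar eigenvalue signatures, which matches the abstract's claim that the local-holistic branch can borrow geometry from distant regions to compensate for the missing part.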

Acknowledgements

This work was supported by the National Natural Science Foundation of China under grants 62032022, 62176244, and 62006215, and by the Natural Science Foundation of Zhejiang Province under grants LZ20F030001 and LQ20F030016.

Author information

Corresponding author

Correspondence to Feilong Cao.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Xiang, M., Ye, H., Yang, B. et al. Multi-space and detail-supplemented attention network for point cloud completion. Appl Intell 53, 14971–14985 (2023). https://doi.org/10.1007/s10489-022-04219-3

Download citation

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s10489-022-04219-3

Keywords

Navigation