Abstract
3D point cloud registration is a fundamental task in computer vision. In recent years, a variety of learning-based methods have been proposed to solve this problem. These methods largely overcome the traditional over-reliance on initialization and improve the ability to obtain correspondences. However, few of them pay sufficient attention to local features, which tends to cause mismatches. This paper therefore proposes two networks that extract local features thoroughly. To obtain more accurate correspondences between point clouds, we propose a feature weight allocation network (FWANet), in which the expressive power of features is enhanced by the proposed significant feature extraction module. In addition, we use an interference elimination module to remove interfering points and strengthen the internal correlation of the point clouds. We also propose a spatial structural generation network (SSGNet), which fully exploits spatial location information to determine spatial correspondences and generates reliable connections after concatenating multi-dimensional features. Finally, combining FWANet with SSGNet allows a complete feature space to be captured effectively. We conducted extensive experiments on the ModelNet40 dataset and achieved excellent results. Experimental results on four types of data demonstrate the superiority of our algorithm over state-of-the-art methods. Our code will be available at https://github.com/liu-zikang/registration as soon as the paper is accepted.
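The abstract describes a correspondence-based pipeline: two feature branches (FWANet and SSGNet) whose outputs are concatenated, matched across the two point clouds, and used to estimate a rigid transform. The page does not give the internals of either network, so the sketch below is only a minimal illustration of that general shape under stated assumptions: PointwiseMLP, soft_correspondences, and procrustes_alignment are hypothetical names, the MLP branches are placeholders for FWANet/SSGNet, and the soft-matching plus SVD (Kabsch) solver is a standard formulation rather than the paper's exact method.

```python
# Hedged sketch of a correspondence-based rigid registration pipeline.
# The FWANet / SSGNet internals are NOT specified on this page; they are
# replaced here by simple per-point MLP stand-ins (hypothetical).
import torch


class PointwiseMLP(torch.nn.Module):
    """Hypothetical stand-in for one feature branch (e.g., FWANet or SSGNet)."""

    def __init__(self, c_out=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, c_out))

    def forward(self, x):  # x: (B, N, 3) point coordinates
        return self.net(x)  # (B, N, c_out) per-point features


def soft_correspondences(feat_src, feat_tgt, tgt):
    # Soft matching: each source point gets a weighted average of target points.
    scores = torch.matmul(feat_src, feat_tgt.transpose(1, 2))  # (B, N, N) similarity
    weights = torch.softmax(scores, dim=2)                     # row-stochastic matching
    return torch.matmul(weights, tgt)                          # (B, N, 3) pseudo-targets


def procrustes_alignment(src, pseudo_tgt):
    # Closed-form rigid transform (R, t) aligning src to pseudo_tgt (Kabsch via SVD).
    src_c = src - src.mean(dim=1, keepdim=True)
    tgt_c = pseudo_tgt - pseudo_tgt.mean(dim=1, keepdim=True)
    H = torch.matmul(src_c.transpose(1, 2), tgt_c)             # (B, 3, 3) cross-covariance
    U, _, Vt = torch.linalg.svd(H)
    V = Vt.transpose(1, 2)
    d = torch.sign(torch.linalg.det(torch.matmul(V, U.transpose(1, 2))))
    D = torch.diag_embed(torch.stack([torch.ones_like(d), torch.ones_like(d), d], dim=1))
    R = torch.matmul(V, torch.matmul(D, U.transpose(1, 2)))    # reflection-safe rotation
    t = pseudo_tgt.mean(dim=1) - torch.matmul(R, src.mean(dim=1).unsqueeze(2)).squeeze(2)
    return R, t


# Usage: concatenate the two feature branches, match, and solve for (R, t).
fwanet, ssgnet = PointwiseMLP(), PointwiseMLP()
src, tgt = torch.randn(2, 1024, 3), torch.randn(2, 1024, 3)
feat_src = torch.cat([fwanet(src), ssgnet(src)], dim=2)
feat_tgt = torch.cat([fwanet(tgt), ssgnet(tgt)], dim=2)
R, t = procrustes_alignment(src, soft_correspondences(feat_src, feat_tgt, tgt))
```

The closed-form Procrustes step is shown only because it is the usual final stage of such pipelines; the paper's actual transform estimation and the interference elimination module may differ.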






Funding
This study was supported by the National Natural Science Foundation of China (No. 62171314), awarded to Kai He.
Author information
Contributions
Zikang Liu and Kai He contributed to the conceptualization and validation; Zikang Liu contributed to the methodology, software, and formal analysis; Dazhuang Zhang contributed to the investigation and data curation; Lei Wang contributed to the resources and writing—original draft preparation; Kai He contributed to writing—review and editing, supervision, project administration, and funding acquisition. All authors have read and agreed to the published version of the manuscript.
Ethics declarations
Conflict of interest
The authors have no competing interests to declare that are relevant to the content of this article.
Code availability
The code will be made available at https://github.com/liu-zikang/registration upon acceptance of the paper.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Liu, Z., He, K., Zhang, D. et al. Local feature guidance framework for robust 3D point cloud registration. Vis Comput 39, 6459–6472 (2023). https://doi.org/10.1007/s00371-022-02739-0