Abstract
Depth completion, the task of predicting dense depth maps from sparse ones, is an important topic in computer vision. Both traditional image-processing algorithms and data-driven deep-learning algorithms have been established in the literature to cope with the task. In general, traditional algorithms, built on non-learnable operations such as interpolation and custom kernels, handle flat regions well but may blunt sharp edges. Deep-learning algorithms, despite their strengths in many aspects, still have several limitations: their performance depends heavily on the quality of the given sparse maps, and the dense maps they produce may contain artifacts and often lack geometric consistency. To tackle these issues, in this work we propose a simple yet effective algorithm that combines the strengths of traditional image-processing techniques and prevalent deep-learning methods. Given a sparse depth map, our algorithm first generates a semi-dense map with an adaptive densification module (ADM) and a 3D pose map with a coordinate projection module (CPM), and then feeds both maps into a two-branch convolutional neural network to produce the final dense depth map. The proposed algorithm is evaluated on both a challenging outdoor dataset (KITTI) and an indoor dataset (NYUv2); the experimental results show that our method outperforms several existing methods.
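The abstract does not detail the internals of the ADM or CPM, but the two pre-processing steps it describes can be illustrated with a minimal sketch: a dilation-style densification that fills empty pixels from valid neighbours (in the spirit of classical image-processing completion), and a pinhole back-projection that lifts each depth pixel into 3D camera coordinates. All function names and parameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def densify(sparse, kernel=3, iters=2):
    """Naive densification sketch: fill each empty (zero) pixel with the
    maximum valid depth in a kernel-sized neighbourhood, repeated iters times."""
    dense = sparse.copy()
    pad = kernel // 2
    for _ in range(iters):
        padded = np.pad(dense, pad, mode="constant")
        out = dense.copy()
        for i in range(dense.shape[0]):
            for j in range(dense.shape[1]):
                if out[i, j] == 0:
                    # Window centred on (i, j) in the original image.
                    window = padded[i:i + kernel, j:j + kernel]
                    if window.max() > 0:
                        out[i, j] = window.max()
        dense = out
    return dense

def back_project(depth, fx, fy, cx, cy):
    """CPM-style sketch: lift every pixel (u, v, d) to camera-frame
    coordinates (X, Y, Z) with a pinhole model, giving an (H, W, 3) map."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.stack([X, Y, depth], axis=-1)
```

In a pipeline of the kind the abstract describes, the semi-dense map from `densify` and the 3D coordinate map from `back_project` would each feed one branch of the two-branch network.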










Data Availability
The data that support the findings of this study are available from the corresponding author, upon reasonable request.
Ethics declarations
Conflict of interest
All the authors declare that they have no conflict of interest.
About this article
Cite this article
Xu, J., Zhu, Y., Wang, W. et al. A real-time semi-dense depth-guided depth completion network. Vis Comput 40, 87–97 (2024). https://doi.org/10.1007/s00371-022-02767-w