Abstract
Unmanned Aerial Vehicles (UAVs) can quickly scan unknown environments to support a wide range of operations, from intelligence gathering to search and rescue. LiDAR point clouds provide a detailed and accurate 3D representation of such environments. However, LiDAR point clouds are often sparse and miss important information due to occlusions and limited sensor resolution. Several studies have applied inpainting techniques to LiDAR point clouds to complete the missing regions, but these studies share three main limitations that hinder their use in UAV-based 3D environment reconstruction. First, existing studies focus only on synthetic data. Second, although point clouds obtained from a UAV flying at moderate to high speeds can be severely distorted, no existing study applies inpainting to UAV-based LiDAR point clouds. Third, all existing techniques inpaint isolated objects and do not generalise to complete environments. This paper addresses these gaps by proposing an algorithm for inpainting point clouds of complete 3D environments obtained from a UAV. We use a supervised encoder-decoder model for point cloud inpainting and environment reconstruction, and we evaluate the proposed approach across different LiDAR parameters and environmental settings. The results demonstrate the ability of the system to inpaint objects with a minimum average Chamfer Distance (CD) loss of 0.028 at a UAV speed of 5 m\(\,\)s\(^{-1}\). We also present 3D reconstruction results for several test environments.
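The Chamfer Distance quoted above is a standard completion metric: each point in one cloud is matched to its nearest neighbour in the other, and the mean distances in both directions are summed. The sketch below shows one common convention (mean of squared nearest-neighbour distances, symmetrised); the paper's exact convention (squared vs. unsquared distances, sum vs. mean) is not stated in the abstract, so treat this as illustrative only.

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point sets p (N, 3) and q (M, 3)."""
    # Pairwise squared Euclidean distances via broadcasting, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Mean squared distance from each cloud to its nearest neighbours
    # in the other cloud, summed over both directions.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())
```

For large clouds the O(NM) distance matrix is usually replaced by a k-d tree nearest-neighbour query, but the definition is the same.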
Acknowledgement
This work is funded by the Australian Research Council Grant DP200101211.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Talha, M., Hussein, A., Hossny, M. (2024). LiDAR Inpainting of UAV Based 3D Point Cloud Using Supervised Learning. In: Liu, T., Webb, G., Yue, L., Wang, D. (eds) AI 2023: Advances in Artificial Intelligence. AI 2023. Lecture Notes in Computer Science(), vol 14471. Springer, Singapore. https://doi.org/10.1007/978-981-99-8388-9_17
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8387-2
Online ISBN: 978-981-99-8388-9
eBook Packages: Computer Science (R0)