Abstract
The rapid evolution of drone technology has revolutionized data acquisition in the construction industry, offering a cost-effective and efficient way to monitor and map engineering structures. A significant challenge, however, remains in transforming drone-collected data into semantically meaningful 3D models. Conventional 3D reconstruction techniques yield raw point clouds that are unstructured and lack the semantic and geometric object information required by civil engineering tools. Our solution applies semantic segmentation algorithms to the data produced by NeRF (Neural Radiance Fields), effectively transforming drone-captured 3D volumetric representations into semantically rich 3D models. This approach offers a low-cost and automated way to digitalize physical objects on construction sites into semantically annotated digital counterparts, facilitating the development of digital twins and XR applications in the construction sector.
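To make the idea concrete, the sketch below illustrates one plausible instantiation of the pipeline described in the abstract: render or extract geometry from a trained NeRF, segment a rendered view with an off-the-shelf 2D model, and project the 2D labels back onto the 3D points. This is a minimal illustration, not the authors' implementation; the use of torchvision's pre-trained DeepLabV3 (which would in practice be replaced or fine-tuned with construction-specific classes), the pinhole camera model, and all input names are assumptions.

```python
# Minimal sketch (not the authors' pipeline): label a NeRF-derived point cloud
# by segmenting one rendered view with a 2D model and projecting the labels
# onto the 3D points. DeepLabV3, the pinhole model, and the inputs below are
# illustrative assumptions; occlusion handling is deliberately omitted.
import numpy as np
import torch
from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights


def segment_view(image_hwc_uint8: np.ndarray) -> np.ndarray:
    """Semantic segmentation of a single rendered view (H, W, 3) -> (H, W) class ids."""
    weights = DeepLabV3_ResNet50_Weights.DEFAULT
    model = deeplabv3_resnet50(weights=weights).eval()
    preprocess = weights.transforms()
    x = preprocess(torch.from_numpy(image_hwc_uint8).permute(2, 0, 1))
    with torch.no_grad():
        logits = model(x.unsqueeze(0))["out"]               # (1, C, h, w)
    logits = torch.nn.functional.interpolate(                # back to input resolution
        logits, size=image_hwc_uint8.shape[:2], mode="bilinear", align_corners=False)
    return logits.argmax(1).squeeze(0).numpy()


def project_labels(points_w: np.ndarray, K: np.ndarray, T_cw: np.ndarray,
                   label_map: np.ndarray) -> np.ndarray:
    """Assign each 3D point the label of the pixel it projects to (pinhole camera).

    points_w: (N, 3) world coordinates; K: (3, 3) intrinsics;
    T_cw: (4, 4) world-to-camera transform; label_map: (H, W) class ids.
    Returns (N,) labels, -1 where a point does not project into the view.
    """
    n = points_w.shape[0]
    labels = np.full(n, -1, dtype=np.int64)
    pts_h = np.hstack([points_w, np.ones((n, 1))])
    pts_c = (T_cw @ pts_h.T).T[:, :3]                        # camera-frame coordinates
    in_front = pts_c[:, 2] > 1e-6                            # keep points in front of camera
    uvw = (K @ pts_c[in_front].T).T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)      # perspective division
    h, w = label_map.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = label_map[uv[valid, 1], uv[valid, 0]]
    return labels
```

In a realistic setting one would fuse labels from many rendered views (e.g., by majority voting per point) and reject occluded projections by comparing each point's camera-frame depth against a depth map rendered from the NeRF; the single-view sketch above only conveys the 2D-to-3D label-transfer idea.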
Acknowledgments
This work was supported by the EC-funded research and innovation programme H2020 ASHVIN: “Digitising and transforming the European construction industry” under grant agreement No. 958161.
EU disclaimer: This publication reflects only the authors' view, and the European Commission is not responsible for any use that may be made of the information it contains.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Vrachnos, P., Krestenitis, M., Koulalis, I., Ioannidis, K., Vrochidis, S. (2024). A Framework for 3D Modeling of Construction Sites Using Aerial Imagery and Semantic NeRFs. In: Rudinac, S., et al. MultiMedia Modeling. MMM 2024. Lecture Notes in Computer Science, vol 14557. Springer, Cham. https://doi.org/10.1007/978-3-031-53302-0_13