Abstract
3D point clouds are structured collections of elementary geometric primitives that can characterize the size, shape, orientation and position of objects in space. In building modelling and in Cultural Heritage documentation and preservation, the classification and segmentation of point clouds are challenging tasks because of the complexity and variety of the data, which arise from irregular sampling, varying density and the different types of objects represented. With the advent of multimedia big data, machine-learning approaches have evolved into deep-learning approaches, which offer a more powerful and efficient way of dealing with the complexity of semantic object classification. Despite the great gains in automation that such approaches bring, a major obstacle is generating enough training data, which today are labeled manually. This task is time-consuming because of the variety of point densities and geometries that is typical of the Cultural Heritage domain. To accelerate the development of powerful algorithms for CH point cloud classification, this paper presents a novel framework for the automatic generation of synthetic point cloud datasets. The task is performed with Blender, an open-source software package whose API gives access to each vertex of an object, allowing it to be recreated as a point in a new mesh. The algorithms described make it possible to create a large number of point clouds synthetically by simulating a virtual laser scanner at a variable distance. Furthermore, these two algorithms are not limited to a single object: many point clouds can be created simultaneously from a Blender scene, including from an existing model of ancient architectures.
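The core idea of sampling an object's geometry as seen by a range-limited virtual scanner can be sketched outside Blender as well. The following is a minimal illustration, not the authors' actual script: it treats a mesh's vertices as candidate returns of a virtual laser scanner placed at `scanner_pos`, keeping only those within a working range. The function name and the range limits are illustrative assumptions, not values from the paper.

```python
import numpy as np

def virtual_scan(vertices, scanner_pos, min_range=0.6, max_range=60.0):
    """Return the subset of mesh vertices that a range-limited virtual
    scanner at scanner_pos would capture as a point cloud.

    min_range / max_range are hypothetical working limits of the scanner.
    """
    v = np.asarray(vertices, dtype=float)
    d = np.linalg.norm(v - np.asarray(scanner_pos, dtype=float), axis=1)
    mask = (d >= min_range) & (d <= max_range)  # keep only reachable points
    return v[mask]
```

In Blender itself the vertex coordinates would come from the mesh data of a scene object (via its Python API) rather than from a plain array, but the distance-based selection is the same.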
A Appendix: Error
To create a dataset that is more realistic and closer to what a laser scanner would actually produce, an error function was added. This function simulates the error of a laser scan as a function of the scanning distance \(d\). The definition was derived from the data sheet (Footnote 2) and is based on the function in (1):

\(e(d) = a d^{2} + b d + c\)  (1)

where

- \(a = -1 \times 10^{-5}\)
- \(b = 4 \times 10^{-4}\)
- \(c = 1 \times 10^{-3}\)
which are obtained from the specifications sheet of the LEICA BLK360 (see Footnote 2). We chose this instrument because it is a mid-range device that is widespread among surveying professionals.
Equation (1) is derived from a simple parametric system (Eq. 2); the values of the parameters are reported in Table 1.
where \(c\) is chosen as \(c = 1 \times 10^{-3}\). In the code, this function is used as the variance of the normal distribution employed to simulate the error.
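The error model above can be sketched as follows. This is a minimal illustration of the described mechanism, not the authors' code: the quadratic in (1) gives the variance of a zero-mean Gaussian perturbation applied to each point, scaled by its distance from the scanner. The function names and the non-negativity guard are assumptions added for the sketch.

```python
import numpy as np

# Coefficients reported in the appendix (from the LEICA BLK360 data sheet).
A, B, C = -1e-5, 4e-4, 1e-3

def scan_variance(distance):
    """Variance of the simulated ranging error at a given distance,
    modelled as the quadratic e(d) = a*d^2 + b*d + c of Eq. (1)."""
    return A * distance**2 + B * distance + C

def add_scan_noise(points, scanner_pos, rng=None):
    """Perturb each point with zero-mean Gaussian noise whose variance
    grows with its distance from the scanner position."""
    rng = rng if rng is not None else np.random.default_rng(0)
    points = np.asarray(points, dtype=float)
    d = np.linalg.norm(points - np.asarray(scanner_pos, dtype=float), axis=1)
    # Guard added for the sketch: a variance must be non-negative, and the
    # downward-opening quadratic would go negative at very long ranges.
    sigma = np.sqrt(np.maximum(scan_variance(d), 0.0))
    return points + rng.normal(0.0, 1.0, points.shape) * sigma[:, None]
```

With these coefficients the standard deviation at 10 m is \(\sqrt{4 \times 10^{-3}} \approx 0.063\), i.e. the noise stays at the millimetre-to-centimetre scale, consistent with a terrestrial scanner.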
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Pierdicca, R., Mameli, M., Malinverni, E.S., Paolanti, M., Frontoni, E. (2019). Automatic Generation of Point Cloud Synthetic Dataset for Historical Building Representation. In: De Paolis, L., Bourdot, P. (eds) Augmented Reality, Virtual Reality, and Computer Graphics. AVR 2019. Lecture Notes in Computer Science(), vol 11613. Springer, Cham. https://doi.org/10.1007/978-3-030-25965-5_16
Print ISBN: 978-3-030-25964-8
Online ISBN: 978-3-030-25965-5