
Automatic Generation of Point Cloud Synthetic Dataset for Historical Building Representation

  • Conference paper
  • In:
Augmented Reality, Virtual Reality, and Computer Graphics (AVR 2019)

Abstract

3D point clouds are structured collections of elementary geometric primitives that can characterize the size, shape, orientation and position of objects in space. In building modelling and in the documentation and preservation of Cultural Heritage, the classification and segmentation of point clouds are challenging because of their complexity and variety, which stem from irregular sampling, varying density and the different types of objects represented. With the advent of multimedia big data, machine-learning approaches have evolved into deep learning approaches, a more powerful and efficient way of dealing with the complexity of semantic object classification. Despite the great benefits that such approaches have brought to automation, a major obstacle is generating enough training data, which today are labeled manually. This task is time-consuming for two reasons typical of the Cultural Heritage domain: the variety of point densities and of geometries. To accelerate the development of powerful algorithms for CH point cloud classification, this paper presents a novel framework for the automatic generation of synthetic point cloud datasets. The task is performed with Blender, an open source software that makes it possible to access each vertex of an object and create a corresponding point in a new mesh. The algorithms described can synthetically create a large number of point clouds, simulating a virtual laser scanner at a variable distance. Furthermore, the two algorithms are not limited to a single object: many point clouds can be created simultaneously from a Blender scene, including scenes built from existing models of ancient architectures.
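The virtual-scanner idea mentioned in the abstract can be illustrated with a short sketch. The code below is hypothetical and does not use Blender's API: the scene is reduced to a single analytic sphere, and the name `sphere_scan` and its parameters are invented for illustration. It casts rays from a scanner position over an angular grid and records the surface intersections as a synthetic point cloud.

```python
import math

def sphere_scan(origin, radius=1.0, steps=20):
    """Simulate a virtual laser scanner: cast rays from `origin` over an
    angular grid and record intersections with a sphere centred at the
    world origin. Illustrative stand-in for ray casting against a mesh."""
    ox, oy, oz = origin
    points = []
    for i in range(steps):
        for j in range(steps):
            # Sweep a +/-10 degree angular window around the forward axis.
            yaw = math.radians(-10 + 20 * i / (steps - 1))
            pitch = math.radians(-10 + 20 * j / (steps - 1))
            dx, dy = math.sin(yaw), math.sin(pitch)
            dz = math.cos(yaw) * math.cos(pitch)
            # Normalize the ray direction.
            n = math.sqrt(dx * dx + dy * dy + dz * dz)
            dx, dy, dz = dx / n, dy / n, dz / n
            # Ray-sphere intersection: |o + t*d|^2 = r^2.
            b = 2 * (ox * dx + oy * dy + oz * dz)
            c = ox * ox + oy * oy + oz * oz - radius * radius
            disc = b * b - 4 * c
            if disc >= 0:
                t = (-b - math.sqrt(disc)) / 2  # nearest hit along the ray
                points.append((ox + t * dx, oy + t * dy, oz + t * dz))
    return points

pts = sphere_scan((0.0, 0.0, -5.0))
```

In the actual framework the same ray casting would be performed against arbitrary meshes inside Blender, whose Python API exposes an analogous ray-cast operation on scene objects.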


Notes

  1. https://www.blender.org, last accessed July 16, 2019.

  2. https://lasers.leica-geosystems.com/eu/sites/lasers.leica-geosystems.com.eu/files/leica_media/product_documents/blk/prod_docs_blk360/leica_blk360_spec_sheet.pdf (LEICA BLK360 specification sheet).


Author information

Correspondence to Roberto Pierdicca.

A Appendix: Error

To create a dataset that is more realistic and closer to what a laser scanner would produce, an error function was added. This function simulates the error of a laser scan as a function of the scanning distance. Its definition comes from the instrument's data sheet (Footnote 2) and is based on the function in (1)

$$\begin{aligned} y = a x^2 + b x + c \end{aligned}$$
(1)

where

  • \(a = -1\times 10^{-5}\)

  • \(b = 4\times 10^{-4}\)

  • \(c = 1\times 10^{-3}\)

which are obtained from the specification sheet of the LEICA BLK360 (see Footnote 2). We chose this instrument because it is a medium-level one, widespread among professionals dealing with surveying.

Table 1. The values of the parameters of the system reported in Eq. 2

Equation 1 is derived from a simple parametric system (Eq. 2). The values of its parameters are reported in Table 1.

$$\begin{aligned} {\left\{ \begin{array}{ll} 4\times 10^{-3} = a \cdot 10^2 + b \cdot 10 + c \\ 7\times 10^{-3} = a \cdot 20^2 + b \cdot 20 + c \end{array}\right. } \end{aligned}$$
(2)

where c is fixed at \(c = 1\times 10^{-3}\). In the code environment, this function is used as the variance of the normal distribution employed to simulate the error generation.
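The error model above can be sketched in a few lines of Python (hypothetical code, not the authors' Blender script; the function names are invented for illustration). It evaluates Eq. 1 with the fitted coefficients and displaces a synthetic point along the scanner's line of sight by a zero-mean Gaussian sample. Note that the appendix describes the value of Eq. 1 as the variance, while this sketch passes it directly as the spread parameter of `random.gauss`, a simplifying choice of the example.

```python
import math
import random

# Coefficients of Eq. (1), fitted in the appendix to the accuracy
# figures of the LEICA BLK360 specification sheet.
A, B, C = -1e-5, 4e-4, 1e-3

def range_error(distance_m):
    """Scanner error as a quadratic function of range (Eq. 1)."""
    return A * distance_m ** 2 + B * distance_m + C

def perturb_point(point, scanner_pos, rng=random):
    """Displace a synthetic point along the scanner's line of sight by a
    zero-mean Gaussian sample whose spread follows Eq. 1."""
    offset = [p - s for p, s in zip(point, scanner_pos)]
    dist = math.sqrt(sum(d * d for d in offset))
    noise = rng.gauss(0.0, range_error(dist))
    scale = (dist + noise) / dist  # move the point along the ray
    return tuple(s + d * scale for s, d in zip(scanner_pos, offset))

# Example: a point 10 m from the scanner receives noise on the order
# of the 4 mm error that Eq. 1 yields at that range.
noisy = perturb_point((0.0, 0.0, 10.0), (0.0, 0.0, 0.0))
```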


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Pierdicca, R., Mameli, M., Malinverni, E.S., Paolanti, M., Frontoni, E. (2019). Automatic Generation of Point Cloud Synthetic Dataset for Historical Building Representation. In: De Paolis, L., Bourdot, P. (eds) Augmented Reality, Virtual Reality, and Computer Graphics. AVR 2019. Lecture Notes in Computer Science(), vol 11613. Springer, Cham. https://doi.org/10.1007/978-3-030-25965-5_16


  • DOI: https://doi.org/10.1007/978-3-030-25965-5_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-25964-8

  • Online ISBN: 978-3-030-25965-5

