Reconstruction of Cultural Heritage 3D Models from Sparse Point Clouds Using Implicit Neural Representations

  • Conference paper
Pattern Recognition, Computer Vision, and Image Processing. ICPR 2022 International Workshops and Challenges (ICPR 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13645)

Abstract

Creating accessible museums and exhibitions is a key factor in today’s society, which strives for inclusivity. Visually impaired people can benefit from manually examining exhibition pieces to better understand the features and shapes of these objects. Unfortunately, this is rarely possible, since such items are usually kept behind protective barriers due to their rarity, worn condition, and/or antiquity. Nevertheless, tactile access can be provided through 3D-printed replicas of these collections. Fabricating copies through 3D printing is much easier and less time-consuming than replicating such items manually, which enables museums to acquire copies of other exhibitions more efficiently. In this paper, an accessibility-oriented methodology for reconstructing exhibits from sparse 3D models is presented. The proposed methodology introduces a novel periodic and parametric activation function, named WaveShaping (WS), which is used by a multi-layer perceptron (MLP) to reconstruct 3D models from coarsely retrieved 3D point clouds. The MLP is trained to learn a continuous function that describes the coarse representation of a 3D model. The trained MLP is then regarded as a continuous implicit representation of the model; hence, it can interpolate data points to refine and restore regions of the model. Experimental evaluation on 3D models from the ShapeNet dataset indicates that the novel WS activation function can improve 3D reconstruction performance for coarse point cloud representations of the models.
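To make the pipeline concrete, the following is a minimal PyTorch sketch of how an implicit-surface MLP with a periodic, parametric activation could be set up and fitted to a single sparse point cloud. The abstract does not give the exact form of the WaveShaping function or the training loss, so the names PeriodicParametricActivation, ImplicitMLP, and fit_to_sparse_points, the sine-based activation with learnable frequency and blend parameters, and the signed-distance-style loss below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class PeriodicParametricActivation(nn.Module):
    # Assumed stand-in for the WaveShaping activation: a sine with a learnable
    # frequency, blended with the identity through a learnable weight.
    def __init__(self, init_freq: float = 30.0):
        super().__init__()
        self.freq = nn.Parameter(torch.tensor(init_freq))
        self.blend = nn.Parameter(torch.tensor(0.5))

    def forward(self, x):
        return self.blend * torch.sin(self.freq * x) + (1.0 - self.blend) * x


class ImplicitMLP(nn.Module):
    # MLP mapping a 3D coordinate to a scalar, signed-distance-like value.
    def __init__(self, hidden: int = 256, layers: int = 5):
        super().__init__()
        dims = [3] + [hidden] * layers + [1]
        blocks = []
        for i in range(len(dims) - 2):
            blocks += [nn.Linear(dims[i], dims[i + 1]), PeriodicParametricActivation()]
        blocks.append(nn.Linear(dims[-2], dims[-1]))
        self.net = nn.Sequential(*blocks)

    def forward(self, xyz):
        return self.net(xyz)


def fit_to_sparse_points(points: torch.Tensor, steps: int = 2000) -> ImplicitMLP:
    # Overfit one network to one coarse point cloud (coordinates assumed to be
    # normalised to [-1, 1]^3): surface points should map to zero, while random
    # off-surface samples are pushed away from the zero level set.
    model = ImplicitMLP()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(steps):
        off_surface = torch.rand(points.shape[0], 3) * 2.0 - 1.0
        on_pred = model(points)
        off_pred = model(off_surface)
        loss = on_pred.abs().mean() + torch.exp(-50.0 * off_pred.abs()).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

Once fitted, the network can be queried on a dense voxel grid and a mesh extracted with a marching-cubes routine (for example, skimage.measure.marching_cubes), which is the usual way an implicit representation is turned into a printable model; the refinement comes from evaluating the learned continuous function at points that were absent from the coarse cloud.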

This work is supported by the project “Smart Tourist” (MIS 5047243), which is implemented under the Action “Reinforcement of the Research and Innovation Infrastructure”, funded by the Operational Programme “Competitiveness, Entrepreneurship and Innovation” (NSRF 2014-2020) and co-financed by Greece and the European Union (European Regional Development Fund).

References

  1. 3D objects - Archaeological site of Delphi - Museum of Delphi. https://delphi.culture.gr/digital-tour/digital-objects-3d/

  2. AliceVision: Meshroom: A 3D reconstruction software (2018). https://github.com/alicevision/meshroom

  3. Bagautdinov, T., Wu, C., Saragih, J., Fua, P., Sheikh, Y.: Modeling facial geometry using compositional VAEs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3877–3886 (2018)

  4. Ballarin, M., Balletti, C., Vernier, P.: Replicas in cultural heritage: 3D printing and the museum experience. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 42(2), 55–62 (2018)

  5. Barrow, H.G., Tenenbaum, J.M., Bolles, R.C., Wolf, H.C.: Parametric correspondence and chamfer matching: two new techniques for image matching. Technical report, Sri International Menlo Park CA Artificial Intelligence Center (1977)

  6. Carrizosa, H.G., Sheehy, K., Rix, J., Seale, J., Hayhoe, S.: Designing technologies for museums: accessibility and participation issues. J. Enabling Technol. 14(1), 31–39 (2020)

  7. Chabra, R., et al.: Deep local shapes: learning local SDF priors for detailed 3D reconstruction. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12374, pp. 608–625. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58526-6_36

  8. Chang, A.X., et al.: ShapeNet: An Information-Rich 3D Model Repository. Technical report. arXiv:1512.03012, Stanford University – Princeton University – Toyota Technological Institute at Chicago (2015)

  9. Chen, Z., Zhang, H.: Learning implicit fields for generative shape modeling. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5939–5948 (2019)

  10. Chibane, J., Pons-Moll, G., et al.: Neural unsigned distance fields for implicit function learning. Adv. Neural. Inf. Process. Syst. 33, 21638–21652 (2020)

  11. Cignoni, P., Callieri, M., Corsini, M., Dellepiane, M., Ganovelli, F., Ranzuglia, G., et al.: Meshlab: an open-source mesh processing tool. In: Eurographics Italian Chapter Conference, Salerno, Italy, vol. 2008, pp. 129–136 (2008)

  12. Dai, A., Ruizhongtai Qi, C., Nießner, M.: Shape completion using 3D-encoder-predictor CNNs and shape synthesis. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5868–5877 (2017)

  13. Desvallées, A.: Key concepts of museology. Armand Colin (2010)

  14. Fontanella, F., Colace, F., Molinara, M., Di Freca, A.S., Stanco, F.: Pattern recognition and artificial intelligence techniques for cultural heritage (2020)

  15. Gomes, L., Bellon, O.R.P., Silva, L.: 3D reconstruction methods for digital preservation of cultural heritage: a survey. Pattern Recogn. Lett. 50, 3–14 (2014)

  16. Gropp, A., Yariv, L., Haim, N., Atzmon, M., Lipman, Y.: Implicit geometric regularization for learning shapes. arXiv preprint arXiv:2002.10099 (2020)

  17. Groueix, T., Fisher, M., Kim, V.G., Russell, B.C., Aubry, M.: A papier-mâché approach to learning 3D surface generation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 216–224 (2018)

  18. Huovilainen, A.: Non-linear digital implementation of the Moog ladder filter. In: Proceedings of the International Conference on Digital Audio Effects (DAFx 2004), pp. 61–64 (2004)

  19. Kantorovich, L.V.: Mathematical methods of organizing and planning production. Manage. Sci. 6(4), 366–422 (1960)

  20. Lazzarini, V., Timoney, J.: New perspectives on distortion synthesis for virtual analog oscillators. Comput. Music. J. 34(1), 28–40 (2010)

  21. Levina, E., Bickel, P.: The earth mover’s distance is the mallows distance: some insights from statistics. In: Proceedings Eighth IEEE International Conference on Computer Vision, ICCV 2001, vol. 2, pp. 251–256. IEEE (2001)

  22. Lewiner, T., Lopes, H., Vieira, A.W., Tavares, G.: Efficient implementation of marching cubes’ cases with topological guarantees. J. Graph. Tools 8(2), 1–15 (2003)

  23. Ma, B., Han, Z., Liu, Y.S., Zwicker, M.: Neural-pull: learning signed distance functions from point clouds by learning to pull space onto surfaces. arXiv preprint arXiv:2011.13495 (2020)

  24. Mahmood, M.A., Visan, A.I., Ristoscu, C., Mihailescu, I.N.: Artificial neural network algorithms for 3D printing. Materials 14(1), 163 (2020)

  25. Mescheder, L., Oechsle, M., Niemeyer, M., Nowozin, S., Geiger, A.: Occupancy networks: learning 3D reconstruction in function space. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4460–4470 (2019)

  26. Neumüller, M., Reichinger, A., Rist, F., Kern, C.: 3D printing for cultural heritage: preservation, accessibility, research and education. In: Ioannides, M., Quak, E. (eds.) 3D Research Challenges in Cultural Heritage. LNCS, vol. 8355, pp. 119–134. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44630-0_9

  27. Osher, S., Fedkiw, R., Piechor, K.: Level set methods and dynamic implicit surfaces. Appl. Mech. Rev. 57(3), B15–B15 (2004)

  28. Pakarinen, J., Yeh, D.T.: A review of digital techniques for modeling vacuum-tube guitar amplifiers. Comput. Music. J. 33(2), 85–100 (2009)

  29. Park, J.J., Florence, P., Straub, J., Newcombe, R., Lovegrove, S.: Deepsdf: learning continuous signed distance functions for shape representation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 165–174 (2019)

  30. Paszke, A., et al.: Pytorch: an imperative style, high-performance deep learning library. In: Wallach, H., Larochelle, H., Beygelzimer, A., d’ Alché-Buc, F., Fox, E., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 32, pp. 8024–8035. Curran Associates, Inc. (2019). https://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf

  31. Peng, S., Niemeyer, M., Mescheder, L., Pollefeys, M., Geiger, A.: Convolutional occupancy networks. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12348, pp. 523–540. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58580-8_31

  32. Qi, C.R., Su, H., Mo, K., Guibas, L.J.: Pointnet: deep learning on point sets for 3D classification and segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652–660 (2017)

  33. Rubner, Y., Tomasi, C., Guibas, L.J.: A metric for distributions with applications to image databases. In: Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271), pp. 59–66. IEEE (1998)

  34. Sitzmann, V., Martel, J., Bergman, A., Lindell, D., Wetzstein, G.: Implicit neural representations with periodic activation functions. Adv. Neural. Inf. Process. Syst. 33, 7462–7473 (2020)

  35. Vaz, R., Freitas, D., Coelho, A.: Blind and visually impaired visitors’ experiences in museums: Increasing accessibility through assistive technologies. Int. J. Inclusive Mus. 13(2), 57 (2020)

  36. Wilson, P.F., Stott, J., Warnett, J.M., Attridge, A., Smith, M.P., Williams, M.A.: Evaluation of touchable 3D-printed replicas in museums. Curator Mus. J. 60(4), 445–465 (2017)

  37. Wu, Z., et al.: 3D shapenets: a deep representation for volumetric shapes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1912–1920 (2015)

  38. Yuan, W., Khot, T., Held, D., Mertz, C., Hebert, M.: PCN: point completion network. In: 2018 International Conference on 3D Vision (3DV), pp. 728–737. IEEE (2018)


Author information

Corresponding author

Correspondence to Dimitris K. Iakovidis.

Copyright information

© 2023 Springer Nature Switzerland AG

About this paper

Cite this paper

Triantafyllou, G., Dimas, G., Kalozoumis, P.G., Iakovidis, D.K. (2023). Reconstruction of Cultural Heritage 3D Models from Sparse Point Clouds Using Implicit Neural Representations. In: Rousseau, JJ., Kapralos, B. (eds) Pattern Recognition, Computer Vision, and Image Processing. ICPR 2022 International Workshops and Challenges. ICPR 2022. Lecture Notes in Computer Science, vol 13645. Springer, Cham. https://doi.org/10.1007/978-3-031-37731-0_3

  • DOI: https://doi.org/10.1007/978-3-031-37731-0_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-37730-3

  • Online ISBN: 978-3-031-37731-0

  • eBook Packages: Computer Science, Computer Science (R0)
