Can OpenPose Be Used as a 3D Registration Method for 3D Scans of Cultural Heritage Artifacts

  • Conference paper
Pattern Recognition. ICPR International Workshops and Challenges (ICPR 2021)

Abstract

3D scanning of artifacts is an important tool for the study and preservation of cultural heritage. Systems for 3D reconstruction are constantly improving, but due to the shape and size of artifacts it is usually necessary to scan from several different positions in space. This raises the problem of 3D registration, the process of aligning point clouds acquired from different views. Software-based 3D registration methods typically require identifying a sufficient number of corresponding point pairs between the point clouds. These correspondences are frequently found manually and/or by introducing specially designed objects into the scene. In this work, by contrast, we explore whether OpenPose, a well-known deep learning model, can be used to find corresponding point pairs between different views and ultimately ensure a successful 3D registration. OpenPose is trained to detect patterns and keypoints in images containing people. Noting that many artifacts do have human-like postures, we test our ideas for finding correspondences with OpenPose directly on such artifacts. Furthermore, if an artifact bears no resemblance to a human figure, we demonstrate a method that introduces a simple human-like image into the 3D scene, in turn allowing OpenPose to facilitate 3D registration between scans from different views. The proposed 3D registration pipeline is easily applicable to many existing 3D scanning solutions for artifacts.
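Once OpenPose keypoints have been matched across views and lifted to 3D, the remaining step is a rigid-body alignment (rotation and translation) estimated from those correspondences. The snippet below is a minimal sketch of that step, not the authors' implementation: it assumes the matched keypoints from two views are already available as 3D coordinates (for example, back-projected using the scanner's depth data) and aligns them with a standard SVD-based least-squares solution.

import numpy as np

def rigid_transform_from_keypoints(P, Q):
    # Estimate rotation R and translation t such that R @ p_i + t ~ q_i
    # in the least-squares sense (SVD-based Kabsch/Horn solution).
    # P, Q: (N, 3) arrays of matched 3D keypoints from two views,
    # e.g. OpenPose body joints back-projected with the scanner's depth
    # data (an assumption of this sketch, not the paper's exact pipeline).
    assert P.shape == Q.shape and P.shape[1] == 3
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)        # centroids
    X, Y = P - cP, Q - cQ                          # centered point sets
    U, _, Vt = np.linalg.svd(X.T @ Y)              # SVD of cross-covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Hypothetical usage: kpts_view1 and kpts_view2 hold matched 3D keypoints
# recovered from OpenPose detections in two scanning positions.
# R, t = rigid_transform_from_keypoints(kpts_view1, kpts_view2)
# registered = (R @ cloud_view1.T).T + t    # align the full point cloud

A coarse alignment of this kind is commonly refined afterwards with ICP on the full point clouds.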



Acknowledgment

This work has been fully supported by the Croatian Science Foundation under the project IP-2018-01-8118.

Author information


Corresponding author

Correspondence to Tomislav Pribanić.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Pribanić, T., Bojanić, D., Bartol, K., Petković, T. (2021). Can OpenPose Be Used as a 3D Registration Method for 3D Scans of Cultural Heritage Artifacts. In: Del Bimbo, A., et al. (eds.) Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol. 12667. Springer, Cham. https://doi.org/10.1007/978-3-030-68787-8_6


  • DOI: https://doi.org/10.1007/978-3-030-68787-8_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-68786-1

  • Online ISBN: 978-3-030-68787-8

  • eBook Packages: Computer Science, Computer Science (R0)
