3D-C2FT: Coarse-to-Fine Transformer for Multi-view 3D Reconstruction

  • Conference paper
Computer Vision – ACCV 2022 (ACCV 2022)

Abstract

Recently, the transformer model has been successfully employed for the multi-view 3D reconstruction problem. However, challenges remain in designing an attention mechanism that explores multi-view features and exploits their relations to reinforce the encoding-decoding modules. This paper proposes a new model, the 3D coarse-to-fine transformer (3D-C2FT), which introduces a novel coarse-to-fine (C2F) attention mechanism for encoding multi-view features and rectifying defective voxel-based 3D objects. The C2F attention mechanism enables the model to learn the multi-view information flow and synthesize 3D surface corrections in a coarse-to-fine manner. The proposed model is evaluated on the ShapeNet and Multi-view Real-life voxel-based datasets. Experimental results show that 3D-C2FT achieves notable results and outperforms several competing models on these datasets.
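The abstract's coarse-to-fine idea can be pictured as attention applied first over a few pooled per-view tokens and then over the finer patch tokens, with the coarse context fed back into the fine stage. The sketch below is a conceptual illustration only, not the authors' 3D-C2FT architecture: the token counts, mean-pooling scheme, and the way coarse context is added to fine tokens are all assumptions made for the example.

```python
import numpy as np

def attention(q, k, v):
    """Plain scaled dot-product attention (single head, no projections)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)          # row-wise softmax
    return w @ v

rng = np.random.default_rng(0)
n_views, patches, dim = 4, 16, 32

# Fine tokens: 16 patch embeddings per view, stacked across views.
fine_tokens = rng.normal(size=(n_views * patches, dim))

# Coarse tokens: one pooled embedding per view (illustrative pooling choice).
coarse_tokens = fine_tokens.reshape(n_views, patches, dim).mean(axis=1)

# Coarse stage: views attend to each other at low granularity.
coarse_out = attention(coarse_tokens, coarse_tokens, coarse_tokens)

# Fine stage: broadcast each view's coarse context back to its patch
# tokens, then attend at fine granularity.
context = np.repeat(coarse_out, patches, axis=0)
fine_out = attention(fine_tokens + context, fine_tokens + context, fine_tokens)

print(coarse_out.shape, fine_out.shape)  # (4, 32) (64, 32)
```

In this toy setup the coarse pass is cheap (4x4 attention) while still letting every view influence every patch token in the fine pass, which is the general intuition behind coarse-to-fine attention schemes.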

L. C. O. Tiong and D. Sigmund—These authors have contributed equally to this work.


Notes

  1. Source Code URL: https://github.com/tiongleslie/3D-C2FT/.


Author information

Correspondence to Andrew Beng Jin Teoh.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (zip 11840 KB)


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Tiong, L.C.O., Sigmund, D., Teoh, A.B.J. (2023). 3D-C2FT: Coarse-to-Fine Transformer for Multi-view 3D Reconstruction. In: Wang, L., Gall, J., Chin, T.-J., Sato, I., Chellappa, R. (eds.) Computer Vision – ACCV 2022. ACCV 2022. Lecture Notes in Computer Science, vol. 13841. Springer, Cham. https://doi.org/10.1007/978-3-031-26319-4_13

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-26319-4_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-26318-7

  • Online ISBN: 978-3-031-26319-4

  • eBook Packages: Computer Science, Computer Science (R0)
