DOI: 10.1145/3446132.3446157
Research article

Colorful 3D reconstruction from a single image based on deep learning

Published: 09 March 2021

ABSTRACT

Simultaneously recovering a 3D shape and its surface color from a single image has long been a challenging problem. In this paper, we substantially improve Soft Rasterizer, a state-of-the-art method for colorful 3D object reconstruction. The model adopts an encoder-decoder structure that takes a single image as input. The encoder first extracts image features, which are then fed simultaneously to a shape generator and a color generator to produce the shape estimate and the corresponding surface color; finally, the colorful 3D model is rendered by a differentiable renderer. To preserve the details of the reconstructed 3D model, we introduce an attention mechanism into the encoder, further improving reconstruction quality. For surface color reconstruction, we propose a combination loss. Experimental results show that, compared with the 3D reconstruction networks 3D-R2N2 and OccNet, our model improves intersection-over-union (IoU) by 10% and 3%, respectively. Compared to the open-source project SoftRas_O, it improves structural similarity (SSIM) by 3.8% and reduces mean squared error (MSE) by 1.2%.
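Although the abstract gives no implementation details, the pipeline it describes (a shared encoder feeding separate shape and color generators, supervised through a differentiable renderer) can be sketched roughly as below. This is a minimal, hypothetical PyTorch sketch: the module names, layer sizes, SE-style attention block, sphere-template deformation, and the composition of the "combination loss" are assumptions made for illustration, and the Soft Rasterizer rendering step itself is omitted (the rendered silhouette and RGB images passed to the loss are presumed to come from it).

```python
# Hypothetical sketch of the pipeline described in the abstract: a shared image
# encoder (with channel attention) feeding a shape generator and a color
# generator. Module names, layer sizes, the SE-style attention block, the
# sphere-template deformation, and the loss composition are illustrative
# assumptions; the Soft Rasterizer rendering step is omitted.
import torch
import torch.nn as nn


class SqueezeExcite(nn.Module):
    """Channel-attention block (SE-style), standing in for the attention
    mechanism added to the encoder."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight feature channels


class ImageEncoder(nn.Module):
    """Extracts a latent feature vector from a single RGB image."""

    def __init__(self, latent_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            SqueezeExcite(64),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            SqueezeExcite(128),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(128, latent_dim)

    def forward(self, img):
        return self.fc(self.backbone(img).flatten(1))


class ShapeGenerator(nn.Module):
    """Predicts per-vertex offsets applied to a template mesh (e.g. a sphere)."""

    def __init__(self, latent_dim=512, num_vertices=642):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, num_vertices * 3))

    def forward(self, z, template_vertices):
        offsets = self.fc(z).view(z.size(0), -1, 3)
        return template_vertices + offsets


class ColorGenerator(nn.Module):
    """Predicts an RGB color for every vertex of the deformed mesh."""

    def __init__(self, latent_dim=512, num_vertices=642):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, num_vertices * 3), nn.Sigmoid())

    def forward(self, z):
        return self.fc(z).view(z.size(0), -1, 3)


def combination_loss(rendered_rgb, rendered_sil, target_rgb, target_sil):
    """One plausible 'combination loss': a silhouette IoU term for shape plus an
    MSE color term on the rendered image. The abstract does not specify the
    actual components, so this composition is an assumption."""
    inter = (rendered_sil * target_sil).sum((1, 2))
    union = (rendered_sil + target_sil - rendered_sil * target_sil).sum((1, 2))
    iou_loss = 1.0 - (inter / union.clamp(min=1e-6)).mean()
    color_loss = torch.mean((rendered_rgb - target_rgb) ** 2)
    return iou_loss + color_loss


if __name__ == "__main__":
    img = torch.randn(2, 3, 64, 64)        # batch of single-view input images
    template = torch.randn(642, 3)         # placeholder for a sphere template mesh
    encoder, shape_g, color_g = ImageEncoder(), ShapeGenerator(), ColorGenerator()
    z = encoder(img)
    vertices = shape_g(z, template)        # (2, 642, 3) deformed mesh vertices
    colors = color_g(z)                    # (2, 642, 3) per-vertex RGB in [0, 1]
```

The point the abstract emphasizes is that shape and color are predicted from the same latent code by separate heads, with supervision flowing through rendered 2D images rather than 3D ground truth; the actual attention block and loss weighting used in the paper may differ from this sketch.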

References

  1. Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. 2015. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012 (2015).
  2. Wenzheng Chen, Huan Ling, Jun Gao, Edward Smith, Jaakko Lehtinen, Alec Jacobson, and Sanja Fidler. 2019. Learning to predict 3D objects with an interpolation-based differentiable renderer. In Advances in Neural Information Processing Systems. 9609–9619.
  3. Christopher B. Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, and Silvio Savarese. 2016. 3D-R2N2: A unified approach for single and multi-view 3D object reconstruction. In European Conference on Computer Vision. Springer, 628–644.
  4. Haoqiang Fan, Hao Su, and Leonidas J. Guibas. 2017. A point set generation network for 3D object reconstruction from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 605–613.
  5. Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. 2016. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2414–2423.
  6. Geoffrey E. Hinton and Ruslan R. Salakhutdinov. 2006. Reducing the dimensionality of data with neural networks. Science 313, 5786 (2006), 504–507.
  7. Jie Hu, Li Shen, and Gang Sun. 2018. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 7132–7141.
  8. Angjoo Kanazawa, Shubham Tulsiani, Alexei A. Efros, and Jitendra Malik. 2018. Learning category-specific mesh reconstruction from image collections. In Proceedings of the European Conference on Computer Vision (ECCV). 371–386.
  9. Tero Karras, Samuli Laine, and Timo Aila. 2019. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 4401–4410.
  10. Hiroharu Kato and Tatsuya Harada. 2019. Learning view priors for single-view 3D reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 9778–9787.
  11. Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada. 2018. Neural 3D mesh renderer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 3907–3916.
  12. Shichen Liu, Tianye Li, Weikai Chen, and Hao Li. 2019. Soft Rasterizer: A differentiable renderer for image-based 3D reasoning. In Proceedings of the IEEE International Conference on Computer Vision. 7708–7717.
  13. Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. 2015. SMPL: A skinned multi-person linear model. ACM Transactions on Graphics (TOG) 34, 6 (2015), 1–16.
  14. Matthew M. Loper and Michael J. Black. 2014. OpenDR: An approximate differentiable renderer. In European Conference on Computer Vision. Springer, 154–169.
  15. Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. 2019. Occupancy networks: Learning 3D reconstruction in function space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 4460–4470.
  16. Ryota Natsume, Shunsuke Saito, Zeng Huang, Weikai Chen, Chongyang Ma, Hao Li, and Shigeo Morishima. 2019. SiCloPe: Silhouette-based clothed people. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 4480–4490.
  17. Yongbin Sun, Ziwei Liu, Yue Wang, and Sanjay E. Sarma. 2018. Im2Avatar: Colorful 3D reconstruction from a single image. arXiv preprint arXiv:1804.06375 (2018).
  18. Nanyang Wang, Yinda Zhang, Zhuwen Li, Yanwei Fu, Wei Liu, and Yu-Gang Jiang. 2018. Pixel2Mesh: Generating 3D mesh models from single RGB images. In Proceedings of the European Conference on Computer Vision (ECCV). 52–67.
  19. Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. 2004. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing 13, 4 (2004), 600–612.
  20. Sanghyun Woo, Jongchan Park, Joon-Young Lee, and In So Kweon. 2018. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV). 3–19.
  21. Jiajun Wu, Yifan Wang, Tianfan Xue, Xingyuan Sun, Bill Freeman, and Josh Tenenbaum. 2017. MarrNet: 3D shape reconstruction via 2.5D sketches. In Advances in Neural Information Processing Systems. 540–550.
  22. Jiajun Wu, Chengkai Zhang, Xiuming Zhang, Zhoutong Zhang, William T. Freeman, and Joshua B. Tenenbaum. 2018. Learning shape priors for single-view 3D completion and reconstruction. In Proceedings of the European Conference on Computer Vision (ECCV). 646–662.
  23. Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 2015. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1912–1920.
  24. Xiuming Zhang, Zhoutong Zhang, Chengkai Zhang, Josh Tenenbaum, Bill Freeman, and Jiajun Wu. 2018. Learning to reconstruct shapes from unseen classes. Advances in Neural Information Processing Systems 31 (2018), 2257–2268.
  25. Hang Zhao, Orazio Gallo, Iuri Frosio, and Jan Kautz. 2016. Loss functions for image restoration with neural networks. IEEE Transactions on Computational Imaging 3, 1 (2016), 47–57.
  26. Silvia Zuffi, Angjoo Kanazawa, Tanya Berger-Wolf, and Michael Black. 2019. Three-D Safari: Learning to estimate zebra pose, shape, and texture from images "in the wild". In 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 5358–5367.

Published in

ACAI '20: Proceedings of the 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence
December 2020, 576 pages
ISBN: 9781450388115
DOI: 10.1145/3446132

Copyright © 2020 ACM. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States

Publication History: Published 9 March 2021
