
Abstract

Image style transfer is a popular and widely studied task in computer vision: it aims to apply the style of a source image to a target image while the target retains its original content. Style transfer is widely used for creating new 2D images, but style transfer on 3D data still faces many challenges. In this paper, we summarize the major existing methods for 3D style transfer, including both traditional and neural-network-based approaches. Moreover, we discuss the application fields of 3D style transfer and directions for future research.
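
To make the objective concrete, the sketch below shows the standard neural style transfer loss (a content loss on feature activations plus a Gram-matrix style loss), which the neural-network-based 3D approaches surveyed here typically adapt to voxel, point-cloud, or mesh features. This is a minimal illustrative example in PyTorch, not the implementation of any cited work; the function names, tensor shapes, and loss weights are assumptions chosen for illustration only.

    # Illustrative sketch of the content/style objective used in neural style
    # transfer; the feature tensors stand in for activations from any extractor.
    import torch

    def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
        """Channel-wise feature correlations; feat has shape (B, C, *spatial)."""
        b, c = feat.shape[:2]
        f = feat.reshape(b, c, -1)               # flatten spatial (or voxel) dims
        return f @ f.transpose(1, 2) / f.shape[-1]

    def style_transfer_loss(gen_feats, content_feats, style_feats,
                            content_weight=1.0, style_weight=1e3):
        """Weighted sum of content loss (feature MSE) and style loss (Gram MSE)."""
        content_loss = sum(torch.nn.functional.mse_loss(g, c)
                           for g, c in zip(gen_feats, content_feats))
        style_loss = sum(torch.nn.functional.mse_loss(gram_matrix(g), gram_matrix(s))
                         for g, s in zip(gen_feats, style_feats))
        return content_weight * content_loss + style_weight * style_loss

    # Toy usage: random tensors standing in for extractor activations.
    gen = [torch.randn(1, 64, 16, 16, requires_grad=True)]
    content = [torch.randn(1, 64, 16, 16)]
    style = [torch.randn(1, 64, 16, 16)]
    loss = style_transfer_loss(gen, content, style)
    loss.backward()                              # gradients flow to the generated features

In 2D the generated image itself is optimized against this loss; the 3D methods discussed in the survey differ mainly in what representation the "generated features" come from and which extractor replaces the 2D convolutional network.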


Acknowledgment

This work was supported in part by the 2021 National pre-research project of Suzhou City University (2021SGY010).

Author information

Corresponding author

Correspondence to Qifeng Zhu.

Copyright information

© 2023 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper

Cite this paper

Zhu, Q., Sun, M., Wang, J. (2023). A Survey on 3D Style Transfer. In: Yu, S., Gu, B., Qu, Y., Wang, X. (eds) Tools for Design, Implementation and Verification of Emerging Information Technologies. TridentCom 2022. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 489. Springer, Cham. https://doi.org/10.1007/978-3-031-33458-0_10

  • DOI: https://doi.org/10.1007/978-3-031-33458-0_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-33457-3

  • Online ISBN: 978-3-031-33458-0

  • eBook Packages: Computer Science, Computer Science (R0)
