
ArtNeRF: A Stylized Neural Field for 3D-Aware Artistic Face Synthesis

  • Conference paper
Pattern Recognition (ICPR 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15325)


Abstract

Recent advances in generative visual models and neural radiance fields have greatly boosted 3D-aware image synthesis and stylization tasks. However, previous NeRF-based work has been limited to single-scene stylization; training a model to generate 3D-aware artistic faces with arbitrary styles remains unsolved. We propose ArtNeRF, a novel face stylization framework derived from 3D-aware GANs, to tackle this problem. In this framework, we utilize an expressive generator to synthesize stylized faces and a triple-branch discriminator module to improve the visual quality and style consistency of the generated faces. Specifically, a style encoder based on contrastive learning extracts robust low-dimensional embeddings of style images, equipping the generator with knowledge of various styles. To smooth the training process of cross-domain transfer learning, we propose an adaptive style blending module that helps inject style information and allows users to freely tune the level of stylization. We further introduce a neural rendering module to achieve efficient real-time rendering of images at higher resolutions. Extensive experiments demonstrate that ArtNeRF is versatile in generating high-quality 3D-aware artistic faces with arbitrary styles. Code is available at: https://github.com/silence-tang/ArtNeRF.
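The abstract describes an adaptive style blending module that lets users tune the level of stylization. The paper does not spell out the mechanism here, but a common realization of such user-tunable blending is a convex interpolation between a source-domain latent and a style-conditioned latent. The sketch below is a minimal, hypothetical illustration of that idea (the function name `adaptive_style_blend` and the latent shapes are our own assumptions, not the authors' implementation):

```python
import numpy as np

def adaptive_style_blend(w_src: np.ndarray, w_style: np.ndarray, alpha: float) -> np.ndarray:
    """Convexly interpolate between a source latent and a style latent.

    alpha = 0.0 keeps the original (photo-domain) latent unchanged;
    alpha = 1.0 applies the style latent fully. Values in between
    give a user-tunable level of stylization.
    """
    alpha = float(np.clip(alpha, 0.0, 1.0))  # keep the blend weight in [0, 1]
    return (1.0 - alpha) * w_src + alpha * w_style

# Toy latents standing in for generator style codes.
w_src = np.zeros(4)
w_style = np.ones(4)
half_stylized = adaptive_style_blend(w_src, w_style, 0.5)  # midpoint of the two latents
```

In practice such a blend would operate on the generator's intermediate style codes rather than raw vectors, and the blending weight could additionally be scheduled during cross-domain fine-tuning to stabilize training, as the abstract suggests.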



Acknowledgements

This work is partly supported by the National Key R&D Program of China (No. 2022ZD0161902), the National Natural Science Foundation of China (No. 62202031), the Beijing Natural Science Foundation (No. 4222049), and the Fundamental Research Funds for the Central Universities.

Author information

Corresponding author

Correspondence to Hongyu Yang.


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Tang, Z., Yang, H. (2025). ArtNeRF: A Stylized Neural Field for 3D-Aware Artistic Face Synthesis. In: Antonacopoulos, A., Chaudhuri, S., Chellappa, R., Liu, CL., Bhattacharya, S., Pal, U. (eds) Pattern Recognition. ICPR 2024. Lecture Notes in Computer Science, vol 15325. Springer, Cham. https://doi.org/10.1007/978-3-031-78389-0_15

  • DOI: https://doi.org/10.1007/978-3-031-78389-0_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-78388-3

  • Online ISBN: 978-3-031-78389-0

  • eBook Packages: Computer Science; Computer Science (R0)
