Audio-Driven Face Photo-Sketch Video Generation

  • Conference paper
PRICAI 2024: Trends in Artificial Intelligence (PRICAI 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 15283)

Abstract

In recent years, with the rapid development of AIGC technologies, significant progress has been made in face photo-sketch synthesis, which plays a crucial role in law enforcement and digital entertainment. However, existing research tends to focus solely on facial information while neglecting the accompanying audio, potentially omitting crucial identity information. Due to the modality gap between face photos and sketches, directly applying existing audio-driven video generation approaches usually yields poor performance. To this end, we propose a novel method for audio-driven face photo-sketch video generation. Our method integrates sketch portrait generation, audio feature extraction, joint optimization of expression and pose networks, and 3D facial rendering, enabling the generation of realistic facial expressions and head poses that are sensitive to audio in sketch style. To enhance the naturalness and clarity of the generated face photo-sketch videos, we further design a sketch portrait embedding method that optimally integrates face photo-sketch synthesis into a conventional audio-driven model for sketch video generation. Extensive experiments show that our method outperforms existing methods in both qualitative and quantitative evaluations.

S. Zhou and Q. Guan contributed equally to this work.
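
The abstract outlines a four-stage pipeline: photo-to-sketch synthesis, per-frame audio feature extraction, audio-driven prediction of expression and pose coefficients, and 3D facial rendering of the animated sketch portrait. The following minimal Python sketch illustrates only that data flow, assuming NumPy alone; every function here (photo_to_sketch, extract_audio_features, predict_expression, predict_pose, render_frame) is a hypothetical placeholder standing in for the paper's learned networks, not the authors' implementation.

    import numpy as np

    def photo_to_sketch(photo: np.ndarray) -> np.ndarray:
        # Placeholder for the learned photo-to-sketch synthesis network.
        return 255 - photo

    def extract_audio_features(audio: np.ndarray, n_frames: int, dim: int = 64) -> np.ndarray:
        # Placeholder for per-frame audio features (e.g. mel-spectrogram windows).
        rng = np.random.default_rng(0)
        return rng.standard_normal((n_frames, dim)).astype(np.float32)

    def predict_expression(feat: np.ndarray) -> np.ndarray:
        # Placeholder for the expression network: per-frame 3DMM-style
        # expression coefficients predicted from the audio feature.
        return np.tanh(feat)

    def predict_pose(feat: np.ndarray) -> np.ndarray:
        # Placeholder for the pose network: yaw/pitch/roll plus translation.
        return np.tanh(feat[:6])

    def render_frame(sketch: np.ndarray, expr: np.ndarray, pose: np.ndarray) -> np.ndarray:
        # Placeholder for the 3D facial renderer; a real renderer would
        # deform the sketch portrait using the expression and pose
        # coefficients. Here the static sketch is returned to stay runnable.
        return sketch

    def generate_sketch_video(photo: np.ndarray, audio: np.ndarray, n_frames: int = 25) -> np.ndarray:
        sketch = photo_to_sketch(photo)                  # photo -> sketch portrait
        feats = extract_audio_features(audio, n_frames)  # audio -> per-frame features
        frames = [render_frame(sketch, predict_expression(f), predict_pose(f)) for f in feats]
        return np.stack(frames)                          # (n_frames, H, W, 3)

    if __name__ == "__main__":
        photo = np.zeros((256, 256, 3), dtype=np.uint8)  # placeholder face photo
        audio = np.zeros(16000, dtype=np.float32)        # 1 s of audio at 16 kHz
        print(generate_sketch_video(photo, audio).shape) # (25, 256, 256, 3)

In the actual method, the expression and pose networks are jointly optimized and the renderer is a 3D face renderer; the stubs above only fix the shapes of the data flowing between stages.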

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant 62276198, Grant U22A2035, Grant U22A2096, Grant 62441601 and Grant 62306227; in part by the Key Research and Development Program of Shaanxi (Program No. 2023-YBGY-231); in part by the Young Elite Scientists Sponsorship Program by CAST under Grant 2022QNRC001; in part by the Guangxi Natural Science Foundation Program under Grant 2021GXNSFDA075011; in part by the Open Research Project of the Key Laboratory of Artificial Intelligence, Ministry of Education, under Grant AI202401; and in part by the Fundamental Research Funds for the Central Universities under Grant QTZX23083, Grant QTZX23042, and Grant ZYTS24142.

Author information

Corresponding author

Correspondence to Chunlei Peng.

Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Zhou, S., Guan, Q., Peng, C., Liu, D., Zheng, Y. (2025). Audio-Driven Face Photo-Sketch Video Generation. In: Hadfi, R., Anthony, P., Sharma, A., Ito, T., Bai, Q. (eds) PRICAI 2024: Trends in Artificial Intelligence. PRICAI 2024. Lecture Notes in Computer Science, vol 15283. Springer, Singapore. https://doi.org/10.1007/978-981-96-0122-6_38

  • DOI: https://doi.org/10.1007/978-981-96-0122-6_38

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-96-0121-9

  • Online ISBN: 978-981-96-0122-6

  • eBook Packages: Computer Science, Computer Science (R0)
