
Head motion synthesis from speech using deep neural networks

Published in Multimedia Tools and Applications

Abstract

This paper presents a deep neural network (DNN) approach to head motion synthesis that automatically predicts a speaker's head movements from his/her speech. Specifically, we realize speech-to-head-motion mapping by learning a DNN from audio-visual broadcast news data. We first show that a generatively pre-trained neural network significantly outperforms a conventional randomly initialized network. We then demonstrate that filter bank (FBank) features outperform mel-frequency cepstral coefficients (MFCC) and linear prediction coefficients (LPC) for head motion prediction. Finally, we show that extra training data from other speakers, used in the pre-training stage, can improve head motion prediction performance for a target speaker. These promising speech-to-head-motion prediction results can be applied to talking avatar animation.
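
The mapping described in the abstract can be sketched in a few lines of code. The example below is not the authors' implementation: it assumes the python_speech_features and scikit-learn packages, computes FBank features, stacks a context window of frames, and fits a randomly initialized feed-forward regressor to head rotation targets (e.g. pitch, yaw, roll). The generative pre-training and HTK-based feature extraction used in the paper are omitted, and all function names are illustrative.

```python
# Minimal sketch of a speech-to-head-motion regressor (not the authors' code).
# Assumptions: python_speech_features for FBank extraction, scikit-learn for the
# network, and frame-aligned head rotation targets; pre-training is not shown.
import numpy as np
from python_speech_features import logfbank
from sklearn.neural_network import MLPRegressor

def stack_context(frames, left=5, right=5):
    """Concatenate each frame with its neighbours to form a context window."""
    padded = np.pad(frames, ((left, right), (0, 0)), mode="edge")
    return np.hstack([padded[i:i + len(frames)] for i in range(left + right + 1)])

def train_speech_to_head_motion(wave, sample_rate, head_angles):
    """wave: 1-D audio samples; head_angles: (n_frames, 3) pitch/yaw/roll targets."""
    fbank = logfbank(wave, samplerate=sample_rate, nfilt=40)  # 40-dim FBank per frame
    X = stack_context(fbank)                                  # context-expanded input
    n = min(len(X), len(head_angles))                         # align audio and motion frames
    model = MLPRegressor(hidden_layer_sizes=(256, 256, 256), max_iter=200)
    model.fit(X[:n], head_angles[:n])
    return model
```

In such a setup, swapping FBank for MFCC or LPC features only changes the feature-extraction call, which is how the feature comparison reported in the paper could be reproduced in spirit.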


Notes

  1. We use the same smoothing method in all experiments; an illustrative smoothing sketch is given below.
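
The smoothing method itself is not restated here; as a purely illustrative stand-in (an assumption, not the method used in the paper), the sketch below applies a simple moving-average filter to each dimension of a predicted head-motion trajectory.

```python
# Illustrative moving-average smoother for predicted head motion (hypothetical;
# the paper's actual smoothing method may differ).
import numpy as np

def smooth_trajectory(motion, window=5):
    """motion: (n_frames, n_dims) predicted head rotations; returns a smoothed copy."""
    kernel = np.ones(window) / window
    pad_left = window // 2
    pad_right = window - 1 - pad_left
    padded = np.pad(motion, ((pad_left, pad_right), (0, 0)), mode="edge")
    # Filter each rotation dimension (e.g. pitch, yaw, roll) independently.
    return np.column_stack(
        [np.convolve(padded[:, d], kernel, mode="valid") for d in range(motion.shape[1])]
    )
```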


Acknowledgments

This work was supported by the National Natural Science Foundation of China (61175018) and the Fok Ying Tung Education Foundation (131059).

Author information


Corresponding author

Correspondence to Lei Xie.


About this article


Cite this article

Ding, C., Xie, L. & Zhu, P. Head motion synthesis from speech using deep neural networks. Multimed Tools Appl 74, 9871–9888 (2015). https://doi.org/10.1007/s11042-014-2156-2

