
Emotional head motion predicting from prosodic and linguistic features

Published in Multimedia Tools and Applications

Abstract

Emotional head motion plays an important role in human-computer interaction (HCI) and is one of the key factors in improving the user experience. However, it is still unclear how head motions are influenced by speech features in different emotional states. In this study, we construct a bimodal mapping model from speech to head motion and investigate which prosodic and linguistic features have the most significant influence on emotional head motion. A two-layer clustering scheme is introduced to obtain reliable clusters from head motion parameters. With these clusters, an emotion-related speech-to-head-gesture mapping model is built with a Classification and Regression Tree (CART). Based on the statistics of the CART, a systematic map of the relationship between speech features (both prosodic and linguistic) and head gestures is presented. The map reveals which features have the most significant influence on head motion in long and short utterances. We also analyze how linguistic features contribute to different emotional expressions. The discussion in this work provides important references for the realistic animation of speech-driven talking heads and avatars.
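As a concrete illustration of this pipeline, the sketch below shows one way a two-layer clustering of head motion parameters followed by a CART mapping from speech features to the resulting gesture clusters could be set up. It uses scikit-learn's KMeans and DecisionTreeClassifier as stand-ins for the clustering scheme and CART described above; the data shapes, feature counts, and layer sizes are illustrative assumptions rather than the configuration used in the paper.

```python
# Hypothetical sketch only: toy data and layer sizes are assumptions,
# not the corpus or settings reported in the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in data: per-utterance head motion parameters (e.g. statistics of
# pitch/yaw/roll angles) and speech features (prosodic + linguistic).
head_motion = rng.normal(size=(500, 6))    # 500 utterances, 6 motion parameters
speech_feats = rng.normal(size=(500, 10))  # 10 prosodic/linguistic features

# Layer 1: coarse clustering of the head motion parameters.
coarse = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(head_motion)

# Layer 2: split each coarse cluster into finer sub-clusters; the result is
# the final gesture cluster label for every utterance.
labels = np.empty(len(head_motion), dtype=int)
next_id = 0
for c in np.unique(coarse):
    idx = np.where(coarse == c)[0]
    sub = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(head_motion[idx])
    labels[idx] = sub + next_id
    next_id += sub.max() + 1

# CART: map speech features to gesture clusters. The learned tree's
# feature_importances_ indicate which speech features drive head motion most.
cart = DecisionTreeClassifier(max_depth=5, random_state=0)
cart.fit(speech_feats, labels)
print(cart.feature_importances_)
```

In such a setup, the tree's feature importances play the role of the statistical map described above, ranking the prosodic and linguistic features by their influence on the predicted gesture cluster.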



Acknowledgments

This work is supported by the National High-Tech Research and Development Program of China (863 Program) (No. 2015AA016305) and the National Natural Science Foundation of China (NSFC) (No. 61332017, No. 61375027, No. 61203258, No. 61273288, No. 61233009, No. 61425017).

Author information

Corresponding author

Correspondence to Minghao Yang.

About this article

Cite this article

Yang, M., Jiang, J., Tao, J. et al. Emotional head motion predicting from prosodic and linguistic features. Multimed Tools Appl 75, 5125–5146 (2016). https://doi.org/10.1007/s11042-016-3405-3

