
Emotion information visualization through learning of 3D morphable face model

  • Original Article
  • Published in The Visual Computer

Abstract

Analysis and visualization of human facial expressions, and their applications, are useful but challenging. This paper presents a novel approach for analyzing facial expressions from images through learning of a 3D morphable face model, together with a quantitative information visualization scheme for exploring this type of visual data. More specifically, a 3D face database with various facial expressions is employed to build a nonnegative matrix factorization (NMF) part-based morphable 3D face model. From an input image, a 3D face with expression is reconstructed iteratively, using the NMF morphable 3D face model as prior knowledge, and basis parameters and a displacement map are extracted from the reconstruction as features for facial emotion analysis and visualization. Based on these features, two support vector regressions are trained to estimate fuzzy valence–arousal (VA) values that quantify the emotions. Continuously changing emotional states can then be analyzed intuitively by visualizing the VA values in VA space. Our emotion analysis and visualization system, built on the 3D NMF morphable face model, detects expressions robustly across various head poses, face sizes, and lighting conditions, and fully automatically computes VA values from single images or video sequences with various facial expressions. We test our system on publicly available databases and evaluate the emotion analysis and visualization results. We also apply our method to quantify emotion changes during motivational interviews. These experiments and applications demonstrate the effectiveness and accuracy of our method.
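
To make the pipeline concrete, the sketch below illustrates the two learning stages summarized above: learning a part-based NMF basis over vectorized 3D face shapes, then training two support vector regressions on the resulting basis coefficients to predict valence and arousal. It is a minimal illustration in Python using scikit-learn with placeholder data; the array shapes, hyperparameters, and the omission of the displacement-map feature are simplifying assumptions for exposition, not the authors' actual implementation.

# Minimal sketch (Python, scikit-learn): part-based NMF face basis + two SVRs
# for valence-arousal regression. Data, shapes, and hyperparameters are
# illustrative placeholders, not the authors' actual model or settings.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Training set: each row is a vectorized, non-negative 3D face shape with
# expression (e.g., flattened vertex coordinates), plus its VA label.
X_train = rng.random((200, 3000))                  # placeholder face data
va_train = rng.uniform(-1.0, 1.0, size=(200, 2))   # placeholder valence/arousal

# Stage 1: learn a part-based morphable basis via non-negative matrix
# factorization; H_train holds the per-face basis coefficients.
nmf = NMF(n_components=30, init="nndsvda", max_iter=500)
H_train = nmf.fit_transform(X_train)

# Stage 2: train two support vector regressions, one for valence and one
# for arousal, on the basis coefficients (the paper additionally uses a
# displacement map as a feature, which is omitted here).
svr_valence = SVR(kernel="rbf", C=10.0).fit(H_train, va_train[:, 0])
svr_arousal = SVR(kernel="rbf", C=10.0).fit(H_train, va_train[:, 1])

# At test time, project a newly reconstructed 3D face onto the learned basis
# and map its coefficients to a point in valence-arousal space.
x_new = rng.random((1, 3000))
h_new = nmf.transform(x_new)
valence, arousal = svr_valence.predict(h_new)[0], svr_arousal.predict(h_new)[0]
print(f"Estimated VA: ({valence:.2f}, {arousal:.2f})")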




Acknowledgements

We would like to thank the reviewers for their valuable suggestions, which helped improve this paper. This work was supported in part by the following grants: LZ16F020002 and NSF CNS-1647200.

Author information

Corresponding author

Correspondence to Jing Hua.

About this article

Cite this article

Jin, H., Wang, X., Lian, Y. et al. Emotion information visualization through learning of 3D morphable face model. Vis Comput 35, 535–548 (2019). https://doi.org/10.1007/s00371-018-1482-1

