Abstract
The analysis and visualization of human facial expressions is useful in many applications but remains challenging. This paper presents a novel approach that analyzes facial expressions in images by learning a 3D morphable face model, together with a quantitative information visualization scheme for exploring this type of visual data. More specifically, a 3D face database with various facial expressions is employed to build a part-based morphable 3D face model via nonnegative matrix factorization (NMF). From an input image, a 3D face with expression is reconstructed iteratively, using the NMF morphable 3D face model as prior knowledge, and its basis parameters and a displacement map are extracted as features for facial emotion analysis and visualization. Based on these features, two support vector regressions are trained to estimate fuzzy valence–arousal (VA) values that quantify the emotions. The continuously changing emotion status can be analyzed intuitively by visualizing the VA values in VA space. Our emotion analysis and visualization system, built on the 3D NMF morphable face model, detects expressions robustly under various head poses, face sizes, and lighting conditions, and computes VA values fully automatically from images or video sequences with various facial expressions. To evaluate the method, we test our system on publicly available databases and assess the emotion analysis and visualization results. We also apply the method to quantify emotion changes during motivational interviews. These experiments and applications demonstrate the effectiveness and accuracy of our method.
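The sketch below illustrates, under stated assumptions, the two learning steps summarized in the abstract: (1) learning a part-based basis from a matrix of 3D face shapes with nonnegative matrix factorization and (2) regressing valence and arousal with two support vector regressions from the resulting basis coefficients concatenated with a displacement-map feature. It is not the authors' implementation; all array sizes, hyperparameters, and the random stand-in data are hypothetical placeholders.

```python
# Minimal sketch of the NMF-basis + twin-SVR pipeline described in the abstract.
# Hypothetical data and dimensions; real training would use registered 3D scans,
# reconstructed displacement maps, and annotated valence-arousal labels.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# --- Step 1: part-based NMF morphable model -----------------------------------
# Rows are training faces; columns are flattened, non-negative 3D shape coordinates.
n_faces, n_coords, n_parts = 200, 3 * 1000, 30        # placeholder sizes
faces = rng.random((n_faces, n_coords))               # stand-in for real scan data

nmf = NMF(n_components=n_parts, init="nndsvda", max_iter=500)
coeffs = nmf.fit_transform(faces)     # per-face basis parameters (n_faces x n_parts)
basis = nmf.components_               # part-based shape basis (n_parts x n_coords)

# --- Step 2: two SVRs mapping features to valence and arousal ------------------
# Features = NMF coefficients concatenated with a (here random) displacement feature.
disp_feats = rng.random((n_faces, 64))                # placeholder displacement features
features = np.hstack([coeffs, disp_feats])
valence = rng.uniform(-1, 1, n_faces)                 # placeholder VA labels
arousal = rng.uniform(-1, 1, n_faces)

svr_valence = SVR(kernel="rbf", C=1.0).fit(features, valence)
svr_arousal = SVR(kernel="rbf", C=1.0).fit(features, arousal)

# For a new face, project onto the learned basis and predict a point in VA space.
new_face = rng.random((1, n_coords))
new_feat = np.hstack([nmf.transform(new_face), rng.random((1, 64))])
print("predicted (valence, arousal):",
      svr_valence.predict(new_feat)[0], svr_arousal.predict(new_feat)[0])
```

Plotting the predicted points over time in the 2D VA plane gives the kind of continuous emotion trajectory the abstract refers to.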
Acknowledgements
We would like to thank the reviewers for their valuable suggestions, which helped improve this paper. This work was supported in part by the following grants: LZ16F020002 and NSF CNS-1647200.
Cite this article
Jin, H., Wang, X., Lian, Y. et al. Emotion information visualization through learning of 3D morphable face model. Vis Comput 35, 535–548 (2019). https://doi.org/10.1007/s00371-018-1482-1