Abstract
This paper presents a method for interpreting facial expressions based on temporal structures among partial movements in facial image sequences. To extract these structures, we propose a novel facial expression representation, analogous to a musical score, which we call a facial score. The facial score describes a facial expression as a spatio-temporal combination of temporal intervals, where each interval represents a simple motion pattern together with its beginning and ending times. This lets us classify fine-grained expressions from multivariate distributions of temporal differences between the intervals in the score. In this paper, we provide a method to obtain the score automatically from input images using bottom-up clustering of dynamics. We evaluate the effectiveness of facial scores by comparing the temporal structure of intentional smiles with that of spontaneous smiles.
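The facial-score representation described above can be pictured as, per facial part, a sequence of labeled temporal intervals, with classification features taken from timing differences between intervals of different parts. The following is a minimal illustrative sketch, not the authors' implementation; all names, mode labels, and frame numbers are hypothetical, and the feature shown (onset-time differences between parts) is just one example of the timing statistics the score makes available.

```python
from itertools import combinations

# Hypothetical facial score: for each facial part, a list of temporal
# intervals (mode label, begin frame, end frame) covering the sequence.
facial_score = {
    "eyes":  [("open", 0, 4), ("narrowing", 5, 12), ("narrowed", 13, 30)],
    "mouth": [("neutral", 0, 8), ("smile-onset", 9, 18), ("smile", 19, 30)],
}

def onset_time(score, part, mode):
    """Begin frame of the first interval with the given mode label."""
    for label, begin, _end in score[part]:
        if label == mode:
            return begin
    raise KeyError(f"no interval of mode {mode!r} for part {part!r}")

def timing_features(score, onsets):
    """Pairwise onset-time differences between parts -- the kind of
    multivariate timing feature used to separate expression classes."""
    feats = {}
    for (p1, m1), (p2, m2) in combinations(onsets, 2):
        feats[(p1, p2)] = onset_time(score, p1, m1) - onset_time(score, p2, m2)
    return feats

feats = timing_features(
    facial_score, [("eyes", "narrowing"), ("mouth", "smile-onset")]
)
print(feats)  # {('eyes', 'mouth'): -4}: eye motion begins 4 frames before the mouth's
```

Under this toy score, a negative eyes-mouth difference means the eye movement leads the mouth movement; distributions of such differences over many sequences are what distinguish, e.g., intentional from spontaneous smiles.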
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Nishiyama, M., Kawashima, H., Hirayama, T., Matsuyama, T. (2005). Facial Expression Representation Based on Timing Structures in Faces. In: Zhao, W., Gong, S., Tang, X. (eds) Analysis and Modelling of Faces and Gestures. AMFG 2005. Lecture Notes in Computer Science, vol 3723. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11564386_12
DOI: https://doi.org/10.1007/11564386_12
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-29229-6
Online ISBN: 978-3-540-32074-6
eBook Packages: Computer Science (R0)