Abstract
Human facial expression modeling and synthesis has recently become a very active area of research, driven in part by potential applications in model-based image coding and by the possibility of enhancing human-computer interaction. Most work in this area has focused on facial expression analysis, modeling, and synthesis. Although good results have been obtained in analysis and synthesis, comparatively little effort has been devoted to synthesizing facial images that look natural. In this paper, we describe our research on facial expression modeling and synthesis. We propose an iterative framework that uses a genetic algorithm to synthesize natural-looking facial images. Facial expression representation and distortion measures are also discussed, and preliminary results are presented.
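The abstract describes an iterative framework in which a genetic algorithm searches for facial images that minimize a distortion measure. The paper itself does not give the algorithm here, so the following is only a minimal sketch of the general idea, assuming expressions are encoded as real-valued parameter vectors and distortion is any scalar function of such a vector (the function `evolve`, the toy `target` vector, and all parameter values are illustrative, not the authors' method):

```python
import random

def evolve(distortion, dim, pop_size=20, generations=50,
           mutation_rate=0.1, seed=0):
    """Minimize `distortion` over vectors in [0, 1]^dim with a simple
    generational GA: keep the better half (elitism), breed children by
    uniform crossover, and perturb genes with Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=distortion)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            # Uniform crossover: each gene comes from either parent.
            child = [a[i] if rng.random() < 0.5 else b[i]
                     for i in range(dim)]
            # Gaussian mutation, clipped back into [0, 1].
            child = [min(1.0, max(0.0, g + rng.gauss(0, 0.05)))
                     if rng.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        pop = elite + children
    return min(pop, key=distortion)

# Toy distortion: squared distance to a fixed "target expression" vector.
target = [0.3, 0.7, 0.5]
best = evolve(lambda v: sum((g - t) ** 2 for g, t in zip(v, target)),
              dim=3)
```

In the paper's setting the distortion would instead compare a synthesized facial image against a reference for naturalness; the sketch only shows the iterative select-crossover-mutate loop that such a framework rests on.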
Copyright information
© 1995 Springer-Verlag Berlin Heidelberg
Cite this paper
Peng, A., Hayes, M.H. (1995). Iterative human facial expression modeling. In: Chin, R.T., Ip, H.H.S., Naiman, A.C., Pong, TC. (eds) Image Analysis Applications and Computer Graphics. ICSC 1995. Lecture Notes in Computer Science, vol 1024. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-60697-1_137
Print ISBN: 978-3-540-60697-0
Online ISBN: 978-3-540-49298-6