ABSTRACT
The process of generating facial models and various poses of these models is a necessary part of most present-day movies, and usually required for any interactive game that features humans as primary characters. The generation of this face data can be approached in ways ranging from pure computation to pure data acquisition. Computational models are flexible but can lack realism and intuitive or simple controls, while data-driven models produce realistic faces but necessitate the often slow and cumbersome capture of new scan data for every desired set of face attributes. Our method is a hybrid approach, which combines a relatively small set of real-world facial data with a computational algorithm that automatically learns the underlying variations in this geometric information. Given a sparse data set that spans variation in viseme, face type, and expression, we are able to generate new faces that exhibit combinations of these attributes that were never part of the original data set. We rely on user-assisted categorization of our sparse data set to associate each piece of face data with a small set of attribute contributions, and then use this categorization data as a guide for binding abstract variation to concrete parameters. This process takes the complex, subtle, and often subjective qualities associated with visemes, expressions, and face types, and correlates them with known geometric features, in order to facilitate the creation of entirely new face poses.
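The core idea of binding user-assigned attribute contributions to geometry can be illustrated with a deliberately simplified sketch. The code below is not the paper's algorithm; it assumes a toy database of flattened vertex vectors, hypothetical attribute names ("smile", viseme "AA"), and a plain linear least-squares fit standing in for the learned variation model, so that an attribute combination absent from the scans can still be synthesized.

```python
import numpy as np

# Toy stand-in for the scan database: each row is one face scan,
# flattened to a vertex-coordinate vector (just 2 numbers here for brevity).
scans = np.array([
    [0.0, 0.0],   # neutral
    [1.0, 0.0],   # full "smile"
    [0.0, 1.0],   # full viseme "AA"
    [1.0, 1.0],   # smile + viseme "AA"
])

# User-assisted categorization: per-scan attribute contributions in [0, 1].
# Columns (smile, viseme "AA") are illustrative names, not the paper's.
attrs = np.array([
    [0.0, 0.0],
    [1.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
])

# Augment with a constant column so the fit includes a mean-face term.
A = np.hstack([attrs, np.ones((len(attrs), 1))])

# Least-squares fit binding abstract attribute values to concrete geometry.
W, *_ = np.linalg.lstsq(A, scans, rcond=None)

def synthesize(new_attrs):
    """Geometry for an attribute combination absent from the scan set."""
    a = np.append(np.asarray(new_attrs, dtype=float), 1.0)
    return a @ W

half_pose = synthesize([0.5, 0.5])  # a pose that was never scanned
```

With real scans, `scans` would hold thousands of coordinates per row and the attribute space would cover many visemes, expressions, and face types; the point of the sketch is only that, once each scan carries attribute labels, interpolating and extrapolating in attribute space yields geometry for unscanned combinations.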