
Rigid stabilization of facial expressions

Published: 27 July 2014

Abstract

Facial scanning has become the industry-standard approach for creating digital doubles in movies and video games. It involves capturing an actor while they perform different expressions that span their range of facial motion. Unfortunately, the scans typically contain a superposition of the desired expression on top of unwanted rigid head movement. To extract true expression deformations, it is essential to factor out the rigid head movement for each expression, a process referred to as rigid stabilization. To achieve production quality, this stabilization is usually performed through a tedious and error-prone manual process. In this paper we present the first automatic face stabilization method that achieves professional-quality results on large sets of facial expressions. Since human faces can undergo a wide range of deformation, there is no single point on the skin surface that moves rigidly with the underlying skull. Consequently, computing the rigid transformation from direct observation of the skin, a common approach in previous methods, is error-prone and leads to inaccurate results. Instead, we propose to stabilize the expressions indirectly, by explicitly aligning them to an estimate of the underlying skull using anatomically motivated constraints. We show that the proposed method not only outperforms existing techniques but is also on par with manual stabilization, while requiring less than a second of computation time.
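For context, the "direct observation" baseline the abstract argues against is the classic least-squares rigid fit between corresponding skin points on an expression scan and a neutral reference (a closed-form SVD solution in the style of Arun et al. 1987). The sketch below is illustrative only, assuming correspondences are already known; the function names and array layout are hypothetical, and this is not the skull-constrained method proposed in the paper.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q.

    P, Q: (N, 3) arrays of corresponding 3D points.
    Classic SVD-based closed-form solution (Kabsch / Arun et al.-style).
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)            # centroids
    H = (P - cP).T @ (Q - cQ)                          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

def stabilize_direct(expression_pts, neutral_pts):
    """Naive stabilization: rigidly align an expression scan to the neutral scan.

    Because every skin point also deforms non-rigidly with the expression,
    this fit absorbs part of the expression into the estimated head pose,
    which is exactly the error the paper's skull-based alignment avoids.
    """
    R, t = rigid_fit(expression_pts, neutral_pts)
    return expression_pts @ R.T + t
```

This baseline makes the paper's point concrete: the rigid estimate is only as good as the rigidity of the points it is fit to, and no skin point moves rigidly with the skull, so an anatomically constrained, indirect alignment is needed instead.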

Supplementary Material

ZIP File (a44-beeler.zip)
Supplemental material.




Published In

ACM Transactions on Graphics, Volume 33, Issue 4 (July 2014), 1366 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/2601097

Publisher

Association for Computing Machinery, New York, NY, United States



Author Tags

  1. face scanning
  2. rigid stabilization

Qualifiers

  • Research-article
