
Facial retargeting with automatic range of motion alignment

Published: 20 July 2017

Abstract

While facial capture focuses on accurately reconstructing an actor's performance, facial animation retargeting aims to transfer that animation to another character while preserving its semantic meaning. Given the popularity of blendshape animation, this effectively means computing suitable blendshape weights for the target character. Current methods either require manually created example pairs of matching actor and character expressions, or are limited to characters with similar facial proportions (i.e., realistic models). In contrast, our approach automatically retargets facial animations from a real actor to stylized characters. We formulate the transfer of a facial rig's blendshapes to an actor as a special case of manifold alignment, exploiting the similarities between the motion spaces defined by the blendshapes and by an expressive training sequence of the actor. In addition, we incorporate a simple, yet elegant facial prior based on discrete differential properties to guarantee smooth mesh deformation. Our method requires only sparse correspondences between characters and is thus suitable for retargeting marker-less and marker-based motion capture, as well as for animation transfer between virtual characters.
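To make the core quantity concrete: in a delta-blendshape model, a face is expressed as the neutral mesh plus a weighted sum of per-expression offsets, and fitting an observed expression reduces to solving for those weights. The sketch below is a generic illustration of that fitting step (an unconstrained least-squares solve with a crude clamp), not the paper's alignment-based method; the function name and the clamp-to-[0, 1] choice are our own assumptions.

```python
import numpy as np

def solve_blendshape_weights(neutral, blendshapes, target):
    """Least-squares fit of blendshape weights to a target expression.

    neutral:      (n, 3) array of neutral-face vertex positions
    blendshapes:  list of (n, 3) arrays, one per expression shape
    target:       (n, 3) array, the captured expression to match
    """
    # Delta-blendshape basis: each column is one shape's offset from neutral.
    B = np.stack([(b - neutral).ravel() for b in blendshapes], axis=1)
    d = (target - neutral).ravel()
    # Unconstrained least squares: find w minimizing ||B w - d||.
    w, *_ = np.linalg.lstsq(B, d, rcond=None)
    # Crude projection into the usual activation range; production rigs
    # use properly constrained (and regularized) solvers instead.
    return np.clip(w, 0.0, 1.0)
```

With a well-conditioned basis and a target that genuinely lies in the blendshape span, the solve recovers the generating weights exactly; the hard part addressed by retargeting research is producing sensible weights when the target character's motion space differs from the actor's.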

Supplementary Material

ZIP File (a154-blanco-i-ribera.zip)
Supplemental files.




Published In

ACM Transactions on Graphics, Volume 36, Issue 4
August 2017
2155 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3072959
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. blend-shapes
  2. facial animation
  3. retargeting

Qualifiers

  • Research-article


Cited By

  • Deep-Learning-Based Facial Retargeting Using Local Patches. Computer Graphics Forum (2024). DOI: 10.1111/cgf.15263
  • A review of motion retargeting techniques for 3D character facial animation. Computers & Graphics 123, 104037 (2024). DOI: 10.1016/j.cag.2024.104037
  • AnaConDaR: Anatomically-Constrained Data-Adaptive Facial Retargeting. Computers & Graphics 122, 103988 (2024). DOI: 10.1016/j.cag.2024.103988
  • Retargeting of facial model for unordered dense point cloud. Computers & Graphics 122, 103972 (2024). DOI: 10.1016/j.cag.2024.103972
  • A Facial Motion Retargeting Pipeline for Appearance Agnostic 3D Characters. Computer Animation and Virtual Worlds 35, 6 (2024). DOI: 10.1002/cav.70001
  • An Implicit Physical Face Model Driven by Expression and Style. SIGGRAPH Asia 2023 Conference Papers, 1-12 (2023). DOI: 10.1145/3610548.3618156
  • FaceXHuBERT: Text-less Speech-driven E(X)pressive 3D Facial Animation Synthesis Using Self-Supervised Speech Representation Learning. Proc. 25th International Conference on Multimodal Interaction, 282-291 (2023). DOI: 10.1145/3577190.3614157
  • HiFace: High-Fidelity 3D Face Reconstruction by Learning Static and Dynamic Details. IEEE/CVF International Conference on Computer Vision (ICCV), 9053-9064 (2023). DOI: 10.1109/ICCV51070.2023.00834
  • Robust monocular 3D face reconstruction under challenging viewing conditions. Neurocomputing 520, 82-93 (2023). DOI: 10.1016/j.neucom.2022.11.048
  • 3D facial expression retargeting framework based on an identity-independent expression feature vector. Multimedia Tools and Applications 82, 15, 23017-23034 (2023). DOI: 10.1007/s11042-023-14547-2
