ABSTRACT
A large number of 3D facial animation techniques have emerged with different goals: making the representation of facial expressions more realistic, decreasing computing time, reducing the need for specialized equipment, and so on. New techniques bring new definitions, concepts, and terms that coexist with methods that have existed for decades. Parameterization, interpolation, blendshapes, motion capture, and others often appear in the literature generically as "techniques", yet they actually sit at different levels of the information hierarchy. This paper aims to clearly classify the different techniques and concepts of the 3D facial animation literature, locating each one within a step of the 3D facial animation pipeline through a parametric analysis.
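As a minimal illustration of two of the concepts named above, blendshapes and interpolation, the sketch below combines hypothetical expression targets with a linear blendshape model and interpolates the weight vector between two keyframes. All shapes, names, and weights are toy assumptions, not data from any system discussed in the paper.

```python
# Hedged sketch: linear blendshapes plus keyframe weight interpolation.
# All vertex data here is hypothetical one-dimensional toy geometry.

def blend(neutral, targets, weights):
    """Linear blendshape model: neutral + sum_i w_i * (target_i - neutral)."""
    result = list(neutral)
    for target, w in zip(targets, weights):
        for i, (t, n) in enumerate(zip(target, neutral)):
            result[i] += w * (t - n)
    return result

def lerp(a, b, t):
    """Keyframe interpolation: linearly blend weight vectors a and b at time t."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

# Neutral face and two hypothetical expression targets (toy "vertices").
neutral = [0.0, 0.0, 0.0]
smile   = [1.0, 0.0, 0.0]
frown   = [0.0, -1.0, 0.0]

# Halfway between a full-smile keyframe and a full-frown keyframe:
w = lerp([1.0, 0.0], [0.0, 1.0], 0.5)
print(blend(neutral, [smile, frown], w))  # [0.5, -0.5, 0.0]
```

Note that the blendshape step and the interpolation step are separate stages: weights are animated over time, while the blendshape model maps any weight vector to geometry, which is one reason the paper treats them as distinct levels of the pipeline rather than interchangeable "techniques".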