DOI: 10.1145/3488162.3488226 · Short paper

A review regarding the 3D facial animation pipeline

Published: 03 January 2022

ABSTRACT

A large number of 3D facial animation techniques have emerged, with different goals: making the representation of facial expressions more realistic, decreasing computing time, reducing the need for specialized equipment, and so on. With new techniques come new definitions, concepts, and terms that correlate with methods that have existed for decades as well as with more recent ones. Parameterization, interpolation, blendshapes, motion capture, and others are concepts that often appear generically in the literature as "techniques", but that in fact occupy different levels of the information hierarchy. This paper aims to clearly classify the techniques and concepts of the 3D facial animation literature, locating each one in its step of the 3D facial animation pipeline through a parametric analysis.
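For context on the terms the abstract distinguishes, here is a minimal sketch of the blendshape representation and of weight interpolation, two of the concepts the paper places in the pipeline. This is illustrative only and not taken from the paper; the array shapes, function names, and values are assumptions.

```python
# Illustrative sketch (not from the paper): a blendshape face is a neutral
# mesh plus a weighted sum of expression deltas; animating the face means
# interpolating the blend weights over time.
import numpy as np

def blendshape_face(neutral, deltas, weights):
    """neutral: (V, 3) vertices; deltas: (K, V, 3) expression offsets;
    weights: (K,) blend weights, typically in [0, 1]."""
    return neutral + np.tensordot(weights, deltas, axes=1)

def lerp_weights(w0, w1, t):
    """Linear interpolation between two weight vectors, t in [0, 1]."""
    return (1.0 - t) * np.asarray(w0, dtype=float) + t * np.asarray(w1, dtype=float)

# Example: blend a quarter of the way from a neutral pose toward a target pose.
neutral = np.zeros((4, 3))                                  # tiny 4-vertex stand-in mesh
deltas = np.random.default_rng(0).random((2, 4, 3)) * 0.1   # two expression targets
frame = blendshape_face(neutral, deltas, lerp_weights([0.0, 0.0], [1.0, 0.5], 0.25))
```

In this framing, "blendshapes" name a representation (the deltas and weights), while "interpolation" names an operation applied to its parameters: exactly the kind of hierarchy distinction the paper sets out to make.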


Published in

SVR '21: Proceedings of the 23rd Symposium on Virtual and Augmented Reality
October 2021, 196 pages
ISBN: 9781450395526
DOI: 10.1145/3488162

Copyright © 2021 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



          Qualifiers

          • short-paper
          • Research
          • Refereed limited
