ABSTRACT
For decades, animation has been a popular storytelling technique. Traditional animation tools are labor-intensive, requiring animators to painstakingly draw frames and motion curves by hand. An alternative workflow is to give animators direct real-time control over digital characters through performance, which offers a more immediate and efficient way to create animation. However, even with existing expression transfer and lip sync methods, producing convincing facial animation in real time remains challenging. In this position paper, I describe my past and proposed future research on interactive systems for perceptually valid expression retargeting from humans to stylized characters, real-time lip sync for 2D animation, and an expressive, style-aligned embodied conversational agent.
Index Terms
- Performance-based Expressive Character Animation