Authors:
Sheldon Schiffer ¹; Samantha Zhang ² and Max Levine ³
Affiliations:
¹ Department of Computer Science, Occidental College, Los Angeles, California, U.S.A.
² Department of Computer Science, Cornell University, Ithaca, NY, U.S.A.
³ Department of Computer Science, University of North Carolina Asheville, Asheville, NC, U.S.A.
Keyword(s):
Facial Emotion Corpora, Video Game Workflow, Non-player Characters, Video Games, Affective Computing, Emotion AI, NPCs, Neural Networks.
Abstract:
The emergence of photorealistic and cinematic non-player character (NPC) animation presents new challenges for video game developers. Game players' expectations of cinematic acting styles bring a more sophisticated aesthetic to the representation of social interaction. New methods can streamline workflow by integrating actor-driven character design into the development of game character AI and animation. A workflow that tracks actor performance through to final neural network (NN) design depends on a rigorous method of producing single-actor video corpora from which to train emotion AI NN models. While numerous video corpora have been developed to study emotion elicitation of the face, both to test theoretical models and to train neural networks to recognize emotion, developing single-actor corpora to train NNs for NPCs in video games is uncommon. A class of facial emotion recognition (FER) products has enabled the production of single-actor video corpora that use emotion analysis data. This paper introduces a single-actor game character corpus workflow for game character developers. The proposed method uses a single-actor video corpus and dataset to train and implement a NN in an off-the-shelf video game engine for facial animation of an NPC. The efficacy of a NN-driven animation controller has already been demonstrated (Schiffer, 2021; Kozasa et al., 2006). This paper focuses on using a single-actor video corpus to train such a NN-driven animation controller.