ABSTRACT
Human communication relies on multiple modalities, such as verbal expressions, facial cues, and bodily gestures. Developing computational approaches to process and generate these multimodal signals is critical for seamless human-agent interaction. A particular challenge is the generation of co-speech gestures: many different gestures can accompany the same verbal utterance, making the task a one-to-many mapping problem. This paper presents an approach based on a Feature Extraction Infusion Network (FEIN-Z) that adopts insights from robot imitation learning and applies them to co-speech gesture generation. Building on the BC-Z architecture, our framework combines transformer architectures with Wasserstein generative adversarial networks. We describe the FEIN-Z methodology and the evaluation results obtained in the GENEA Challenge 2023, demonstrating good overall performance and significant improvements in human-likeness over the GENEA baseline. We discuss potential areas for improvement, such as refining input segmentation, employing more fine-grained control networks, and exploring alternative inference methods.
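To make the adversarial component of the abstract concrete, the following is a minimal sketch of a Wasserstein-GAN training step for a speech-conditioned pose generator. This is not the authors' released FEIN-Z code: the module definitions, dimensions, and hyperparameters are hypothetical, the generator and critic are stand-in MLPs rather than the transformer-based networks the paper describes, and the gradient penalty follows the common WGAN-GP formulation, which may differ from the exact Wasserstein variant used in the paper.

```python
# Hypothetical sketch of one WGAN-GP training step for co-speech gesture
# generation. POSE_DIM / SPEECH_DIM and the MLP stand-ins are illustrative
# assumptions, not the FEIN-Z architecture.
import torch
import torch.nn as nn

POSE_DIM, SPEECH_DIM, HIDDEN = 64, 128, 256

class Generator(nn.Module):
    """Maps speech features plus the previous pose to the next pose frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPEECH_DIM + POSE_DIM, HIDDEN),
            nn.LeakyReLU(0.2),
            nn.Linear(HIDDEN, POSE_DIM),
        )

    def forward(self, speech, prev_pose):
        return self.net(torch.cat([speech, prev_pose], dim=-1))

class Critic(nn.Module):
    """Scores (speech, pose) pairs; higher means 'more like real motion'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPEECH_DIM + POSE_DIM, HIDDEN),
            nn.LeakyReLU(0.2),
            nn.Linear(HIDDEN, 1),
        )

    def forward(self, speech, pose):
        return self.net(torch.cat([speech, pose], dim=-1))

gen, critic = Generator(), Critic()
g_opt = torch.optim.AdamW(gen.parameters(), lr=1e-4)
c_opt = torch.optim.AdamW(critic.parameters(), lr=1e-4)

def train_step(speech, real_pose, prev_pose, gp_weight=10.0):
    # Critic update: Wasserstein loss (fake score minus real score).
    fake_pose = gen(speech, prev_pose).detach()
    c_loss = critic(speech, fake_pose).mean() - critic(speech, real_pose).mean()

    # Gradient penalty on random real/fake interpolates (WGAN-GP style).
    eps = torch.rand(real_pose.size(0), 1)
    interp = (eps * real_pose + (1 - eps) * fake_pose).requires_grad_(True)
    grad = torch.autograd.grad(critic(speech, interp).sum(), interp,
                               create_graph=True)[0]
    c_loss = c_loss + gp_weight * ((grad.norm(2, dim=-1) - 1) ** 2).mean()
    c_opt.zero_grad()
    c_loss.backward()
    c_opt.step()

    # Generator update: maximize the critic's score on generated poses.
    g_loss = -critic(speech, gen(speech, prev_pose)).mean()
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return c_loss.item(), g_loss.item()

# Example usage with random tensors standing in for a real training batch:
B = 8
c_loss, g_loss = train_step(torch.randn(B, SPEECH_DIM),
                            torch.randn(B, POSE_DIM),
                            torch.randn(B, POSE_DIM))
```

At inference time such a generator would be rolled out autoregressively, feeding each predicted pose back in as `prev_pose` for the next frame, which matches the autoregressive behavior-cloning framing in the title.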