- M. Abdar, F. Pourpanah, S. Hussain, D. Rezazadegan, L. Liu, M. Ghavamzadeh, P. Fieguth, X. Cao, A. Khosravi, U. R. Acharya, V. Makarenkov, and S. Nahavandi. 2021. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Inf. Fusion 76, 243–297. ISSN 1566-2535. https://www.sciencedirect.com/science/article/pii/S1566253521001081.
- C. Adam, W. Johal, D. Pellier, H. Fiorino, and S. Pesty. 2016. Social human–robot interaction: A new cognitive and affective interaction-oriented architecture. In A. Agah, J.-J. Cabibihan, A. M. Howard, M. A. Salichs, and H. He (Eds.), Social Robotics, Vol. 9979: Lecture Notes in Computer Science. Springer, Cham, 253–263. ISBN 978-3-319-47437-3.
- C. Ahuja, S. Ma, L.-P. Morency, and Y. Sheikh. 2019. To react or not to react: End-to-end visual pose forecasting for personalized avatar during dyadic conversations. In 2019 International Conference on Multimodal Interaction. ACM, 74–84.
- C. Ahuja, D. W. Lee, Y. I. Nakano, and L.-P. Morency. 2020. Style transfer for co-speech gesture animation: A multi-speaker conditional-mixture approach. In European Conference on Computer Vision, Vol. 12363: Lecture Notes in Computer Science. Springer, 248–265.
- J. R. Anderson, D. Bothell, M. D. Byrne, S. Douglass, C. Lebiere, and Y. Qin. 2004. An integrated theory of the mind. Psychol. Rev. 111, 4, 1036–1060.
- M. Atterer, T. Baumann, and D. Schlangen. 2009. No sooner said than done? Testing incrementality of semantic interpretations of spontaneous speech. In Proceedings of INTERSPEECH 2009. ISCA, 1855–1858.
- P. E. Baxter, J. de Greeff, and T. Belpaeme. 2013. Cognitive architecture for human–robot interaction: Towards behavioural alignment. Biol. Inspired Cogn. Archit. 6, 30–39. BICA 2013: Papers from the Fourth Annual Meeting of the BICA Society. https://www.sciencedirect.com/science/article/pii/S2212683X1300056X.
- E. Bevacqua, K. Prepin, R. Niewiadomski, E. de Sevin, and C. Pelachaud. 2010. Greta: Towards an interactive conversational virtual companion. In Close Engagements with Artificial Companions. John Benjamins, 143–156.
- T. Bickmore, D. Schulman, and L. Yin. 2010. Maintaining engagement in long-term interventions with relational agents. Appl. Artif. Intell. 24, 6, 648–666.
- A. Bono, A. Augello, G. Pilato, F. Vella, and S. Gaglio. 2020. An ACT-R based humanoid social robot to manage storytelling activities. Robotics 9, 2, 25. https://www.mdpi.com/2218-6581/9/2/25.
- T. Bosse, T. Hartmann, R. A. Blankendaal, N. Dokter, M. Otte, and L. Goedschalk. 2018. Virtually bad: A study on virtual agents that physically threaten human beings. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 1258–1266.
- C. Breazeal, A. Brooks, J. Gray, G. Hoffman, C. Kidd, H. Lee, J. Lieberman, A. Lockerd, and D. Chilongo. 2004. Tutelage and collaboration for humanoid robots. Int. J. Humanoid Robot. 1, 2, 315–348.
- J. Broekens. 2021. Emotion. In B. Lugrin, C. Pelachaud, and D. Traum (Eds.), The Handbook on Socially Interactive Agents: 20 years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics Volume 1: Methods, Behavior, Cognition. ACM Press, 349–384.
- H. Buschmeier, T. Baumann, B. Dosch, S. Kopp, and D. Schlangen. 2012. Combining incremental language generation and incremental speech synthesis for adaptive information presentation. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Association for Computational Linguistics, 295–303. https://aclanthology.org/W12-1641.
- J. Cassell. 2001. Embodied conversational agents: Representation and intelligence in user interfaces. AI Mag. 22, 4, 67–67.
- J. Cassell, T. Bickmore, L. Campbell, H. Vilhjalmsson, and H. Yan. 2000. Human conversation as a system framework: Designing embodied conversational agents. In S. Prevost, E. Churchill, J. Cassell, and J. Sullivan (Eds.), Embodied Conversational Agents, Chapter 2. MIT Press, 29–63.
- J. Cassell, H. H. Vilhjálmsson, and T. Bickmore. 2004. BEAT: The Behavior Expression Animation Toolkit. In Life-Like Characters. Springer, 163–185.
- C. Chao and A. L. Thomaz. 2013. Controlling social dynamics with a parametrized model of floor regulation. J. Hum.-Robot Interact. 2, 1, 4–29.
- C.-C. Chiu and S. Marsella. 2011. How to train your avatar: A data driven approach to gesture generation. In International Workshop on Intelligent Virtual Agents, Vol. 6895: Lecture Notes in Computer Science. Springer, 127–140.
- H. H. Clark and M. A. Krych. 2004. Speaking while monitoring addressees for understanding. J. Mem. Lang. 50, 1, 62–81.
- Consequential Robotics. 2020. MiRo-E. Retrieved March 26, 2021, from http://consequentialrobotics.com/miro-beta.
- N. Crook, D. Field, C. Smith, S. Harding, S. Pulman, M. Cavazza, D. Charlton, R. Moore, and J. Boye. 2012. Generating context-sensitive ECA responses to user barge-in interruptions. J. Multimodal User Interfaces 6, 1, 13–25.
- W. Dodd and R. Gutierrez. 2005. The role of episodic memory and emotion in a cognitive robot. In ROMAN 2005. IEEE International Workshop on Robot and Human Interactive Communication, 2005. IEEE, 692–697.
- B. R. Duffy, M. Dragone, and G. M. O’Hare. 2005. Social robot architecture: A framework for explicit social interaction. In Android Science: Towards Social Mechanisms, CogSci 2005 Workshop. Stresa, Italy, 3–4.
- Y. Gal and Z. Ghahramani. 2016. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In M. F. Balcan and K. Q. Weinberger (Eds.), Proceedings of the 33rd International Conference on Machine Learning, Volume 48 of Proceedings of Machine Learning Research. PMLR, New York, NY, 1050–1059. http://proceedings.mlr.press/v48/gal16.html.
- J. Gratch, J. Rickel, E. André, J. Cassell, E. Petajan, and N. Badler. 2002. Creating interactive virtual humans: Some assembly required. IEEE Intell. Syst. 17, 4, 54–63.
- T. Han, C. Kennington, and D. Schlangen. 2018. Placing objects in gesture space: Toward incremental interpretation of multimodal spatial descriptions. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32. https://ojs.aaai.org/index.php/AAAI/article/view/11974.
- Hanson-Robotics. 2007. Zeno. Retrieved March 26, 2021, from https://www.hansonrobotics.com/zeno/.
- D. Hasegawa, N. Kaneko, S. Shirakawa, H. Sakuta, and K. Sumi. 2018. Evaluation of speech-to-gesture generation using bi-directional LSTM network. In Proceedings of the 18th International Conference on Intelligent Virtual Agents. ACM, 79–86.
- T. Hassan and S. Kopp. 2020. Towards an interaction-centered and dynamically constructed episodic memory for social robots. In Companion of the 2020 ACM/IEEE International Conference on Human–Robot Interaction. ACM, 233–235.
- A. Heloir and M. Kipp. 2010. Real-time animation of interactive agents: Specification and realization. Appl. Artif. Intell. 24, 6, 510–529.
- A. Holroyd and C. Rich. 2012. Using the Behavior Markup Language for human–robot interaction. In 2012 7th ACM/IEEE International Conference on Human–Robot Interaction (HRI). IEEE, 147–148.
- M. E. Hoque, M. Courgeon, J.-C. Martin, B. Mutlu, and R. W. Picard. 2013. MACH: My automated conversation coach. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, UbiComp ’13. Association for Computing Machinery, New York, NY, 697–706. ISBN 9781450317702.
- C. Huang and B. Mutlu. 2014. Learning-based modeling of multimodal behaviors for humanlike robots. In 2014 9th ACM/IEEE International Conference on Human–Robot Interaction (HRI). IEEE, 57–64.
- C. T. Ishi, D. Machiyashiki, R. Mikata, and H. Ishiguro. 2018. A speech-driven hand gesture generation method and evaluation in android robots. IEEE Robot. Autom. Lett. 3, 4, 3757–3764.
- K. Janowski, H. Ritschel, and E. André. 2022. Adaptive artificial personalities. In B. Lugrin, C. Pelachaud, and D. Traum (Eds.), The Handbook on Socially Interactive Agents: 20 years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics Volume 2: Interactivity, Platforms, Application. ACM Press, 155–193.
- P. Jonell, T. Kucherenko, G. E. Henter, and J. Beskow. 2020. Let’s face it: Probabilistic multi-modal interlocutor-aware generation of facial gestures in dyadic settings. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents. ACM, 1–8.
- Z. Kasap and N. Magnenat-Thalmann. 2010. Towards episodic memory-based long-term affective interaction with a human-like robot. In 19th International Symposium in Robot and Human Interactive Communication. IEEE, 452–457.
- J. Kędzierski, R. Muszyński, C. Zoll, A. Oleksy, and M. Frontkiewicz. 2013. EMYS–Emotive head of a social robot. Int. J. Soc. Robot. 5, 237–249.
- A. Kendall and Y. Gal. 2017. What uncertainties do we need in Bayesian deep learning for computer vision? arXiv:1703.04977. Retrieved from https://arxiv.org/abs/1703.04977.
- D. E. Kieras and D. E. Meyer. 1997. An overview of the EPIC architecture for cognition and performance with application to human–computer interaction. Hum.–Comput. Interact. 12, 4, 391–438.
- S. Kopp. 2013. Gestures, postures, gaze, and movements in computer science: Embodied agents. In Body–Language–Communication: An International Handbook on Multimodality in Human Interaction. Walter de Gruyter, 1948–1955.
- S. Kopp and N. Krämer. 2021. Revisiting human–agent communication: The importance of joint co-construction and understanding mental states. Front. Psychol. 12, 597.
- S. Kopp and I. Wachsmuth. 2004. Synthesizing multimodal utterances for conversational agents. Comput. Animat. Virtual Worlds 15, 1, 39–52.
- S. Kopp, B. Krenn, S. Marsella, A. N. Marshall, C. Pelachaud, H. Pirker, K. R. Thórisson, and H. Vilhjálmsson. 2006. Towards a common framework for multimodal generation: The Behavior Markup Language. In International Workshop on Intelligent Virtual Agents, Vol. 4133: Lecture Notes in Computer Science. Springer, 205–217.
- S. Kopp, H. van Welbergen, R. Yaghoubzadeh, and H. Buschmeier. 2014. An architecture for fluid real-time conversational agents: Integrating incremental output generation and input processing. J. Multimodal User Interfaces 8, 1, 97–108.
- S. Kopp, M. Brandt, H. Buschmeier, K. Cyra, F. Freigang, N. Krämer, F. Kummert, C. Opfermann, K. Pitsch, L. Schillingmann, C. Straßmann, E. Wall, and R. Yaghoubzadeh. 2018. Conversational assistants for elderly users—The importance of socially cooperative dialogue. In E. André, T. Bickmore, S. Vrochidis, and L. Wanner (Eds.), Proceedings of the AAMAS Workshop on Intelligent Conversation Agents in Home and Geriatric Care Applications, CEUR Workshop Proceedings. RWTH, 10–17.
- T. Kucherenko, P. Jonell, S. van Waveren, G. E. Henter, S. Alexandersson, I. Leite, and H. Kjellström. 2020. Gesticulator: A framework for semantically-aware speech-driven gesture generation. In Proceedings of the 2020 International Conference on Multimodal Interaction. ACM, 242–250.
- J. E. Laird. 2008. Extending the Soar cognitive architecture. In Artificial General Intelligence 2008: Proceedings of the First AGI Conference. IOS, 224–235.
- J. E. Laird. 2019. The Soar Cognitive Architecture. The MIT Press, Cambridge, MA. ISBN 9780262538534.
- J. E. Laird, K. R. Kinkade, S. Mohan, and J. Z. Xu. 2012. Cognitive robotics using the Soar cognitive architecture. In Workshops at the Twenty-Sixth AAAI Conference on Artificial Intelligence. Association for the Advancement of Artificial Intelligence, 46–54.
- J. L. Lakin, V. E. Jefferis, C. M. Cheng, and T. L. Chartrand. 2003. The chameleon effect as social glue: Evidence for the evolutionary significance of nonconscious mimicry. J. Nonverbal Behav. 27, 3, 145–162.
- N. Leßmann, S. Kopp, and I. Wachsmuth. 2008. Situated interaction with a virtual human – perception, action, and cognition. In Situated Communication. De Gruyter Mouton, 287–324.
- S. C. Levinson and F. Torreira. 2015. Timing in turn-taking and its implications for processing models of language. Front. Psychol. 6, 731.
- M. Lhommet, Y. Xu, and S. Marsella. 2015. Cerebella: Automatic generation of nonverbal behavior for virtual humans. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 29. Association for the Advancement of Artificial Intelligence, 4303–4304.
- C. L. Lisetti and A. Marpaung. 2007. Affective cognitive modeling for autonomous agents based on Scherer’s emotion theory. In C. Freksa, M. Kohlhase, and K. Schill (Eds.), KI 2006: Advances in Artificial Intelligence, Vol. 4314: Lecture Notes in Computer Science. Springer, Berlin, 19–32. ISBN 978-3-540-69912-5.
- B. Lugrin and M. Rehm. 2021. Culture for socially interactive agents. In B. Lugrin, C. Pelachaud, and D. Traum (Eds.), The Handbook on Socially Interactive Agents: 20 years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics Volume 1: Methods, Behavior, Cognition. ACM Press, 173–211.
- B. Lugrin, C. Pelachaud, and D. Traum (Eds.). 2021. The Handbook on Socially Interactive Agents: 20 years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics Volume 1: Methods, Behavior, Cognition. ACM Press, 538 pages.
- M. Malfaz, Á. Castro-González, R. Barber, and M. A. Salichs. 2011. A biologically inspired architecture for an autonomous and social robot. IEEE Trans. Auton. Ment. Dev. 3, 3, 232–246.
- Y. Matsuyama, A. Bhardwaj, R. Zhao, O. Romeo, S. Akoju, and J. Cassell. 2016. Socially-aware animated intelligent personal assistant agent. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Association for Computational Linguistics, 224–227.
- G. Metta, G. Sandini, D. Vernon, L. Natale, and F. Nori. 2008. The iCub humanoid robot: An open platform for research in embodied cognition. In Proceedings of the 8th Workshop on Performance Metrics for Intelligent Systems, PerMIS ’08. Association for Computing Machinery, New York, NY, 50–56. ISBN 9781605582931.
- C. Moulin-Frier, T. Fischer, M. Petit, G. Pointeau, J. Y. Puigbo, U. Pattacini, S. C. Low, D. Camilleri, P. Nguyen, M. Hoffmann, H. J. Chang, M. Zambelli, A. L. Mealier, A. Damianou, G. Metta, T. J. Prescott, Y. Demiris, P. F. Dominey, and P. F. M. J. Verschure. 2018. DAC-h3: A proactive robot cognitive architecture to acquire and express knowledge about the world and the self. IEEE Trans. Cogn. Dev. Syst. 10, 4, 1005–1022.
- V. Ng-Thow-Hing, P. Luo, and S. Okita. 2010. Synchronized gesture and speech production for humanoid robots. In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 4617–4624.
- R. Niewiadomski, M. Mancini, and S. Piana. 2013. Human and virtual agent expressive gesture quality analysis and synthesis. In Coverbal Synchrony in Human–Machine Interaction. CRC Press, 269–292.
- A. Nijholt, D. Reidsma, H. van Welbergen, R. op den Akker, and Z. Ruttkay. 2008. Mutually coordinated anticipatory multimodal interaction. In Verbal and Nonverbal Features of Human–Human and Human–Machine Interaction, Vol. 5042: Lecture Notes in Computer Science. Springer, 70–89.
- A. M. Nuxoll and J. E. Laird. 2007. Extending cognitive architecture with episodic memory. In Proceedings of the 22nd National Conference on Artificial Intelligence, AAAI’07. Association for the Advancement of Artificial Intelligence, 1560–1565.
- A. Papangelis, R. Zhao, and J. Cassell. 2014. Towards a computational architecture of dyadic rapport management for virtual agents. In International Conference on Intelligent Virtual Agents, Vol. 8637: Lecture Notes in Computer Science. Springer, 320–324.
- H. W. Park, I. Grover, S. Spaulding, L. Gomez, and C. Breazeal. 2019. A model-free affective reinforcement learning approach to personalization of an autonomous social robot companion for early literacy education. Proc. AAAI Conf. Artif. Intell. 33, 1, 687–694.
- C. Pelachaud, C. Busso, and D. Heylen. 2021. Multimodal behavior modeling for socially interactive agents. In B. Lugrin, C. Pelachaud, and D. Traum (Eds.), The Handbook on Socially Interactive Agents: 20 years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics Volume 1: Methods, Behavior, Cognition. ACM Press, 259–310.
- J. Pöppel and S. Kopp. 2018. Satisficing models of Bayesian theory of mind for explaining behavior of differently uncertain agents. In Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2018). International Foundation for Autonomous Agents and Multiagent Systems, 470–478.
- F. Rabe and I. Wachsmuth. 2013. Enhancing human computer interaction with episodic memory in a virtual guide. In Proceedings of the 15th International Conference on Human-Computer Interaction: Interaction Modalities and Techniques - Volume Part IV (HCI’13). Springer-Verlag, Berlin, Heidelberg, 117–125.
- Robopec. 2021. Reeti: An expressive and communicating robot! Retrieved March 25, 2021, from http://www.reeti.fr/index.php/en/.
- M. Salem, S. Kopp, I. Wachsmuth, K. Rohlfing, and F. Joublin. 2012. Generation and evaluation of communicative robot gesture. Int. J. Soc. Robot. 4, 2, 201–217.
- M. A. Salichs, R. Barber, A. M. Khamis, M. Malfaz, J. F. Gorostiza, R. Pacheco, R. Rivas, A. Corrales, E. Delgado, and D. Garcia. 2006. Maggie: A robotic platform for human–robot social interaction. In 2006 IEEE Conference on Robotics, Automation and Mechatronics. IEEE, 1–7.
- C. Saund and S. Marsella. 2021. Gesture generation. In B. Lugrin, C. Pelachaud, and D. Traum (Eds.), The Handbook on Socially Interactive Agents: 20 years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics Volume 1: Methods, Behavior, Cognition. ACM Press, 213–258.
- D. Schlangen and G. Skantze. 2011. A general, abstract model of incremental dialogue processing. Dialogue Discourse 2, 1, 83–111.
- D. Schlangen, T. Baumann, H. Buschmeier, S. Kopp, G. Skantze, and R. Yaghoubzadeh. 2010. Middleware for incremental processing in conversational agents. In Proceedings of SIGDIAL 2010: The 11th Annual Meeting of the Special Interest Group in Discourse and Dialogue. Association for Computational Linguistics, 51–54.
- D. Seuss, T. Hassan, A. Dieckmann, M. Unfried, K. R. R. Scherer, M. Mortillaro, and J.-U. Garbas. 2021. Automatic estimation of action unit intensities and inference of emotional appraisals. IEEE Trans. Affect. Comput. 1–1.
- T. Shibata. 2012. Therapeutic seal robot as biofeedback medical device: Qualitative and quantitative evaluations of robot therapy in dementia care. Proc. IEEE 100, 8, 2527–2538.
- G. Skantze. 2020. Turn-taking in conversational systems and human–robot interaction: A review. Comput. Speech Lang. 67, 101178.
- G. Skantze and A. Hjalmarsson. 2013. Towards incremental speech generation in conversational systems. Comput. Speech Lang. 27, 1, 243–262.
- SoftBank-Robotics. 2021a. NAO the humanoid and programmable robot. Retrieved April 11, 2021, from https://www.softbankrobotics.com/emea/en/nao.
- SoftBank-Robotics. 2021b. Pepper the humanoid and programmable robot. Retrieved April 11, 2021, from https://www.softbankrobotics.com/emea/en/pepper.
- S. Stange, H. Buschmeier, T. Hassan, C. Ritter, and S. Kopp. 2019. Towards self-explaining social robots: Verbal explanation strategies for a needs-based architecture. In Proceedings of the Workshop on Cognitive Architectures for HRI: Embodied Models of Situated Natural Language Interactions (MM-Cog). Montréal, Canada.
- J. A. Starzyk and J. Graham. 2017. MLECOG: Motivated learning embodied cognitive architecture. IEEE Syst. J. 11, 3, 1272–1283.
- R. Sun. 2007. The importance of cognitive architectures: An analysis based on CLARION. J. Exp. Theor. Artif. Intell. 19, 2, 159–193.
- A. Tanevska, F. Rea, G. Sandini, L. Cañamero, and A. Sciutti. 2019. Eager to learn vs. quick to complain? How a socially adaptive robot architecture performs with different robot personalities. In 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). IEEE, 365–371.
- C. Teufel and B. Nanay. 2017. How to (and how not to) think about top–down influences on visual perception. Conscious. Cogn. 47, 17–25.
- M. Thiebaux, S. Marsella, A. N. Marshall, and M. Kallmann. 2008. SmartBody: Behavior realization for embodied conversational agents. In Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, Vol. 1. International Foundation for Autonomous Agents and Multiagent Systems, 151–158.
- J. G. Trafton, L. M. Hiatt, A. M. Harrison, F. P. Tamborello, S. S. Khemlani, and A. C. Schultz. 2013. ACT-R/E: An embodied cognitive architecture for human–robot interaction. J. Hum.-Robot Interact. 2, 1, 30–55.
- D. Traum, D. DeVault, J. Lee, Z. Wang, and S. Marsella. 2012. Incremental dialogue understanding and feedback for multiparty, multimodal conversation. In International Conference on Intelligent Virtual Agents, Vol. 7502: Lecture Notes in Computer Science. Springer, 275–288.
- H. van Welbergen, R. Yaghoubzadeh, and S. Kopp. 2014. AsapRealizer 2.0: The next steps in fluent behavior realization for ECAs. In International Conference on Intelligent Virtual Agents, Vol. 8637: Lecture Notes in Computer Science. Springer, 449–462.
- H. Vilhjálmsson, N. Cantelmo, J. Cassell, N. E. Chafai, M. Kipp, S. Kopp, M. Mancini, S. Marsella, A. N. Marshall, C. Pelachaud, Z. Ruttkay, K. R. Thórisson, H. van Welbergen, and R. J. van der Werf. 2007. The Behavior Markup Language: Recent developments and challenges. In International Workshop on Intelligent Virtual Agents, Vol. 4722: Lecture Notes in Computer Science. Springer, 99–111.
- P. Wolfert, N. Robinson, and T. Belpaeme. 2022. A review of evaluation practices of gesture generation in embodied conversational agents. IEEE Trans. Hum.-Mach. Syst. 52, 3, 379–389.
- Y. Yoon, W. Ko, M. Jang, J. Lee, J. Kim, and G. Lee. 2019. Robots learn social skills: End-to-end learning of co-speech gesture generation for humanoid robots. In 2019 International Conference on Robotics and Automation (ICRA). IEEE, 4303–4309.
- Y. Yoon, B. Cha, J.-H. Lee, M. Jang, J. Lee, J. Kim, and G. Lee. 2020. Speech gesture generation from the trimodal context of text, audio, and speaker identity. ACM Trans. Graph. 39, 6, 1–16.
Index Terms: The Fabric of Socially Interactive Agents: Multimodal Interaction Architectures