DOI: 10.1145/3571884.3597139
Research Article

The Bot on Speaking Terms: The Effects of Conversation Architecture on Perceptions of Conversational Agents

Published: 19 July 2023

Abstract

Conversational agents mimic natural conversation to interact with users. Since the effectiveness of these interactions strongly depends on how users perceive the agents, it is crucial to design agents’ behaviors to elicit the intended perceptions. Research on human-agent and human-human communication suggests that specifics of speech are associated with perceptions of the communicating parties, but there is no systematic understanding of how the speech specifics of agents affect users’ perceptions. To address this gap, we present a framework outlining the relationships between elements of agents’ conversation architecture (dialog strategy, content affectiveness, content style, and speech format) and aspects of users’ perception (interaction, ability, sociability, and humanness). Synthesized from a review of literature in HCI, NLP, and linguistics (n=57), the framework shows both the identified relationships and the areas lacking empirical evidence. We discuss the framework’s implications for conversation design and highlight inconsistencies in terminology and measurement.
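
As an illustration only (not taken from the paper), the following minimal Python sketch shows one way the framework's cells could be represented: it encodes the four conversation-architecture elements and four perception aspects named above, plus a record type for a single element-aspect relationship flagged by whether empirical evidence was found. The class names, the Relationship structure, and the example pairing are assumptions made for this sketch, not the authors' own representation.

from dataclasses import dataclass, field
from enum import Enum


class ArchitectureElement(Enum):
    # The four conversation-architecture elements named in the abstract.
    DIALOG_STRATEGY = "dialog strategy"
    CONTENT_AFFECTIVENESS = "content affectiveness"
    CONTENT_STYLE = "content style"
    SPEECH_FORMAT = "speech format"


class PerceptionAspect(Enum):
    # The four perception aspects named in the abstract.
    INTERACTION = "interaction"
    ABILITY = "ability"
    SOCIABILITY = "sociability"
    HUMANNESS = "humanness"


@dataclass
class Relationship:
    """One cell of the framework: a link between an architecture element
    and a perception aspect, flagged by whether the reviewed literature
    provides empirical evidence for it."""
    element: ArchitectureElement
    aspect: PerceptionAspect
    has_empirical_evidence: bool
    sources: list[str] = field(default_factory=list)  # e.g. citation keys


# Hypothetical example entry; the specific pairing and source are illustrative.
example = Relationship(
    element=ArchitectureElement.CONTENT_STYLE,
    aspect=PerceptionAspect.SOCIABILITY,
    has_empirical_evidence=True,
    sources=["example citation key"],
)

A structure like this would make it straightforward to enumerate which element-aspect cells of the framework still lack empirical evidence.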


Cited By

  • (2024) Engagement With Conversational Agent–Enabled Interventions in Cardiometabolic Disease Management: Protocol for a Systematic Review. JMIR Research Protocols 13, e52973. DOI: 10.2196/52973. Online publication date: 7 Aug 2024.
  • (2024) Unveiling Information Through Narrative In Conversational Information Seeking. Proceedings of the 6th ACM Conference on Conversational User Interfaces, 1–6. DOI: 10.1145/3640794.3665884. Online publication date: 8 Jul 2024.
  • (2024) Examining Humanness as a Metaphor to Design Voice User Interfaces. Proceedings of the 6th ACM Conference on Conversational User Interfaces, 1–15. DOI: 10.1145/3640794.3665535. Online publication date: 8 Jul 2024.
  • (2024) CUI@CHI 2024: Building Trust in CUIs—From Design to Deployment. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–7. DOI: 10.1145/3613905.3636287. Online publication date: 11 May 2024.
  • (2023) Tickling Proactivity: Exploring the Use of Humor in Proactive Voice Assistants. Proceedings of the 22nd International Conference on Mobile and Ubiquitous Multimedia, 294–320. DOI: 10.1145/3626705.3627777. Online publication date: 3 Dec 2023.


        Published In

        CUI '23: Proceedings of the 5th International Conference on Conversational User Interfaces
        July 2023, 504 pages
        ISBN: 9798400700149
        DOI: 10.1145/3571884

        Publisher

        Association for Computing Machinery, New York, NY, United States

        Publication History

        Published: 19 July 2023


        Author Tags

        1. anthropomorphized perceptions
        2. chatbots
        3. conversation architecture
        4. conversational agents
        5. natural language interface
        6. speech variations
        7. user perceptions
        8. virtual assistants

        Qualifiers

        • Research-article
        • Research
        • Refereed limited

        Conference

        CUI '23
        Sponsor: CUI '23: ACM conference on Conversational User Interfaces
        July 19–21, 2023
        Eindhoven, Netherlands

        Acceptance Rates

        Overall Acceptance Rate 34 of 100 submissions, 34%
