
Dialoging Resonance in Human-Chatbot Conversation: How Users Perceive and Reciprocate Recommendation Chatbot's Self-Disclosure Strategy

Published: 26 April 2024

Abstract

Using chatbots to make recommendations is increasingly popular. The design of recommendation chatbots has mainly taken an information-centric approach, focusing on the recommended content per se. Limited attention has been paid to how social connection and relational strategies, such as self-disclosure by a chatbot, may influence users' perception and acceptance of the recommendation. In this work, we designed, implemented, and evaluated a social chatbot capable of performing three levels of self-disclosure: factual information (low), cognitive opinions (medium), and emotions (high). In the evaluation, we recruited 372 participants to converse with the chatbot on two topics: movies and COVID-19 experiences. For each topic, the chatbot conducted small talk and made recommendations relevant to the topic. Participants were randomly assigned to four experimental conditions in which the chatbot used factual, cognitive, emotional, or adaptive strategies to perform self-disclosure. By training a text classifier to identify users' level of self-disclosure in real time, the adaptive chatbot could dynamically match its self-disclosure language to the level of disclosure exhibited by the users. Our results show that users reciprocate with higher-level self-disclosure when a recommendation chatbot displays emotions throughout the conversation. The chatbot's use of emotional disclosure led to greater enjoyment during the interaction and a more favorable perception of the bot, which in turn made its recommendations more effective, including a higher likelihood of acceptance. We discuss the insights obtained and their implications for future design.
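The adaptive strategy described in the abstract can be sketched as a two-step loop: classify each user utterance into a self-disclosure level, then reply with a self-disclosure of the matching level. The paper trains a text classifier for this; the keyword heuristic, cue-word sets, and response templates below are illustrative stand-ins, not the authors' model, so the example stays self-contained.

```python
# Minimal sketch of level-matched self-disclosure (hypothetical heuristic;
# the paper uses a trained text classifier rather than keyword matching).

DISCLOSURE_LEVELS = ["factual", "cognitive", "emotional"]

# Illustrative cue words only -- assumptions for this sketch.
EMOTION_CUES = {"feel", "felt", "scared", "happy", "lonely", "anxious"}
OPINION_CUES = {"think", "believe", "opinion", "prefer", "agree"}

def classify_disclosure(utterance: str) -> str:
    """Assign an utterance to the highest disclosure level it signals."""
    words = set(utterance.lower().split())
    if words & EMOTION_CUES:      # emotional disclosure (high)
        return "emotional"
    if words & OPINION_CUES:      # cognitive opinion (medium)
        return "cognitive"
    return "factual"              # default: factual information (low)

# Hypothetical response templates, one per disclosure level.
BOT_RESPONSES = {
    "factual": "I watched that movie last week.",
    "cognitive": "I think it is one of the best films this year.",
    "emotional": "I felt so moved when I watched it; it stayed with me.",
}

def adaptive_reply(user_utterance: str) -> str:
    """Match the chatbot's self-disclosure level to the user's."""
    level = classify_disclosure(user_utterance)
    return BOT_RESPONSES[level]
```

In the study's adaptive condition, the classification step runs in real time over the conversation, so the bot's disclosure depth rises and falls with the user's rather than staying fixed.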



Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 8, Issue CSCW1
April 2024, 6294 pages
EISSN: 2573-0142
DOI: 10.1145/3661497
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. affective computing
  2. ai
  3. chatbot
  4. conversational agent
  5. conversational design
  6. dialogue system
  7. nlp
  8. persuasion strategies
  9. recommendation system
  10. relational chatbot
  11. self-disclosure

Qualifiers

  • Research-article

