Abstract
Research on conversational chatbots for mental health applications is an emerging topic. Current work focuses primarily on the usability and acceptance of such systems. However, the human-computer trust relationship is often overlooked, even though it is highly important for the acceptance of chatbots in a clinical environment. This paper presents the creation and evaluation of a trustworthy agent using relational and proactive dialogue. A pilot study with non-clinical subjects showed that a relational strategy using empathetic reactions and small talk failed to foster human-computer trust. However, shifting the initiative to be more proactive appears to be welcomed, as users perceive the agent as more reliable and understandable.
Notes
1. Ref. no. 823907, https://menhir-project.eu.
Acknowledgements
This research has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 823907 (MENHIR: Mental health monitoring through interactive conversations https://menhir-project.eu).
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Kraus, M., Seldschopf, P., Minker, W. (2021). Towards the Development of a Trustworthy Chatbot for Mental Health Applications. In: Lokoč, J., et al. (eds.) MultiMedia Modeling. MMM 2021. Lecture Notes in Computer Science, vol. 12573. Springer, Cham. https://doi.org/10.1007/978-3-030-67835-7_30
Print ISBN: 978-3-030-67834-0
Online ISBN: 978-3-030-67835-7