Abstract
Trust is an important factor influencing user acceptance of high-tech products. As artificial intelligence and natural language processing advance, conversational agents (chatbots) of many kinds have appeared around us. These chatbots can provide people with convenient services such as ordering food, recommending stocks, and diagnosing funds. However, it remains unclear what makes users perceive a chatbot as trustworthy. In this study, we aimed to explore a set of design principles for building trust between users and conversational agents. Based on extensive research on trust, we proposed five design semantics and ten design principles, and verified their effectiveness through experiments. The experimental results suggest that our design principles can improve users' trust in chatbots, thus providing guidance and suggestions for designing more trustworthy chatbots in the future.
Conflict of interest
The authors declare no conflict of interest.
Cite this article
Guo, Y., Wang, J., Wu, R. et al. Designing for trust: a set of design principles to increase trust in chatbot. CCF Trans. Pervasive Comp. Interact. 4, 474–481 (2022). https://doi.org/10.1007/s42486-022-00106-5