
Designing for trust: a set of design principles to increase trust in chatbot

  • Regular Paper
  • Published in: CCF Transactions on Pervasive Computing and Interaction

Abstract

Trust is an important factor in user acceptance of high-tech products. As artificial intelligence and natural language processing advance, conversational agents (chatbots) of all kinds have appeared around us. These chatbots can provide convenient services such as ordering food, recommending stocks, and diagnosing funds. However, it remains unclear what makes users perceive a chatbot as trustworthy. In this study, we aimed to establish a set of design principles for building trust between users and conversational agents. Based on extensive research on trust, we proposed five design semantics and ten design principles, and verified their effectiveness through experiments. The experimental results suggest that our design principles can improve users' trust in chatbots, thus providing guidance and suggestions for designing more trustworthy chatbots in the future.




Author information


Corresponding author

Correspondence to Yunsan Guo.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.


About this article


Cite this article

Guo, Y., Wang, J., Wu, R. et al. Designing for trust: a set of design principles to increase trust in chatbot. CCF Trans. Pervasive Comp. Interact. 4, 474–481 (2022). https://doi.org/10.1007/s42486-022-00106-5
