
The Importance of Distrust in AI

  • Conference paper
  • In: Explainable Artificial Intelligence (xAI 2023)

Abstract

In recent years, the use of Artificial Intelligence (AI) has become increasingly prevalent in a growing number of fields. As AI systems are adopted in more high-stakes areas such as medicine and finance, ensuring that they are trustworthy is of increasing importance. This concern is prominently addressed by the development and application of explainability methods, which are purported to increase the trust of users and wider society. While an increase in trust may be desirable, an analysis of literature from different research fields shows that an exclusive focus on increasing trust may not be warranted. This is well exemplified by the recent development of AI chatbots, which, while highly coherent, tend to make up facts. In this contribution, we investigate the concepts of trust, trustworthiness, and user reliance.

In order to foster appropriate reliance on AI, we need to prevent both disuse of these systems and overtrust in them. From our analysis of research on interpersonal trust, trust in automation, and trust in (X)AI, we identify the potential merit of distinguishing between trust and distrust (in AI). We propose that, alongside trust, a healthy amount of distrust is of additional value for mitigating disuse and overtrust. We argue that by considering and evaluating both trust and distrust, we can ensure that users can rely appropriately on trustworthy AI, which can be both useful and fallible.
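
A minimal, hypothetical sketch of this two-dimensional view, assuming trust and distrust are measured as two separate (e.g. Likert-type) scales rather than as opposite ends of a single scale; all item sets, ranges, and thresholds below are illustrative assumptions, not the paper's method:

    # Hypothetical sketch: score trust and distrust as two separate scales.
    # Items, score ranges, and thresholds are illustrative assumptions only.
    from statistics import mean

    def scale_score(ratings):
        """Average a set of 1-7 Likert ratings for one scale."""
        return mean(ratings)

    def reliance_profile(trust_items, distrust_items, hi=5.0, lo=3.0):
        """Classify a user's stance from separate trust and distrust scores."""
        trust = scale_score(trust_items)
        distrust = scale_score(distrust_items)
        if trust >= hi and distrust <= lo:
            stance = "high trust / low distrust: risk of overtrust if unwarranted"
        elif trust <= lo and distrust >= hi:
            stance = "low trust / high distrust: risk of disuse"
        elif trust >= hi and distrust >= hi:
            stance = "high trust and high distrust: vigilant, critical reliance"
        else:
            stance = "mixed or indifferent"
        return {"trust": trust, "distrust": distrust, "stance": stance}

    # Example: a user who finds the system competent but still checks its outputs.
    print(reliance_profile(trust_items=[6, 6, 5], distrust_items=[5, 6, 5]))

On a single bipolar scale such a user would look merely ambivalent; scoring the two constructs separately makes the combination of reliance and vigilance visible.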

We gratefully acknowledge funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for grant TRR 318/1 2021 - 438445824.

T. M. Peters and R. W. Visser—These authors contributed equally to this work and share first authorship.


Notes

  1. Mistrust is used as a synonym for distrust in this paper.


Author information

Correspondence to Tobias M. Peters or Roel W. Visser.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Peters, T.M., Visser, R.W. (2023). The Importance of Distrust in AI. In: Longo, L. (eds) Explainable Artificial Intelligence. xAI 2023. Communications in Computer and Information Science, vol 1903. Springer, Cham. https://doi.org/10.1007/978-3-031-44070-0_15

  • DOI: https://doi.org/10.1007/978-3-031-44070-0_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44069-4

  • Online ISBN: 978-3-031-44070-0

  • eBook Packages: Computer Science, Computer Science (R0)
