Abstract
With the growing influence of artificial intelligence (AI) in our lives, the ethical implications of AI have received attention from various communities. Building on previous work on trust in people and technology, we advance a multidimensional, multilevel conceptualization of trust in AI and examine the relationship between trust and ethics using data from a survey of a national sample in the U.S. The paper identifies two key dimensions of trust in AI, human-like trust and functionality trust, and presents a multilevel conceptualization in which dispositional, institutional, and experiential trust are each significantly correlated with the two trust dimensions. Along with trust in AI, we examine perceptions of the importance of the seven ethics requirements for AI proposed by the European Commission's High-Level Expert Group, and we evaluate the association between these ethics requirements and trust through regression analysis. Findings suggest that the ethical requirement of societal and environmental well-being is positively associated with human-like trust in AI, whereas two other ethical requirements, accountability and technical robustness, are significantly associated with functionality trust in AI. Further, trust in AI was observed to be higher than trust in other institutions. Drawing from our findings, we offer a multidimensional framework of trust, informed by ethical values, to foster the acceptance of AI as a trustworthy technology.
Data availability
The dataset analyzed in the current study is available from the corresponding author on request.
References
Abney K (2012) Robotics, ethical theory, and metaethics: A guide for the perplexed. In: Lin P, Abney K, Bekey GA (eds) Robot ethics: the ethical and social implications of robotics, 1st MIT Press paperback edn. The MIT Press, Cambridge, MA, pp 35–54
Alarcon GM, Lyons JB, Christensen JC et al (2018) The effect of propensity to trust and perceptions of trustworthiness on trust behaviors in dyads. Behav Res Methods 50:1906–1920. https://doi.org/10.3758/s13428-017-0959-6
Allen C, Wallach W (2012) Moral machines: Contradiction in terms or abdication of human responsibility? In: Lin P, Abney K, Bekey GA (eds) Robot ethics: the ethical and social implications of robotics, 1st MIT Press paperback edn. The MIT Press, Cambridge, MA, pp 55–68
Araujo T, Helberger N, Kruikemeier S, de Vreese CH (2020) In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI Soc 35:611–623. https://doi.org/10.1007/s00146-019-00931-w
Arogyaswamy B (2020) Big tech and societal sustainability: an ethical framework. AI Soc 35:829–840. https://doi.org/10.1007/s00146-020-00956-6
Borgesius FJ (2018) Discrimination, artificial intelligence, and algorithmic decision-making. Directorate General of Democracy, Council of Europe, Strasbourg
Burton JW, Stein M-K, Jensen TB (2020) A systematic review of algorithm aversion in augmented decision making. J Behav Decis Mak 33:220–239. https://doi.org/10.1002/bdm.2155
Calhoun CS, Bobko P, Gallimore JJ, Lyons JB (2019) Linking precursors of interpersonal trust to human-automation trust: An expanded typology and exploratory experiment. J Trust Res 9:28–46. https://doi.org/10.1080/21515581.2019.1579730
Chatila R, Havens JC (2019) The IEEE global initiative on ethics of autonomous and intelligent systems. In: Aldinhas Ferreira MI, Silva Sequeira J, Singh Virk G et al (eds) Robotics and Well-Being. Springer International Publishing, Cham, pp 11–16
Chen SC, Dhillon GS (2003) Interpreting dimensions of consumer trust in e-commerce. Inf Technol Manag 4:303–318
Choung H, David P, Ross A (2022) Trust in AI and its role in the acceptance of AI technologies. Int J Hum-Comput Interact. https://doi.org/10.1080/10447318.2022.2050543
Colquitt JA, Scott BA, LePine JA (2007) Trust, trustworthiness, and trust propensity: a meta-analytic test of their unique relationships with risk taking and job performance. J Appl Psychol 92:909–927. https://doi.org/10.1037/0021-9010.92.4.909
Dietvorst BJ, Simmons JP, Massey C (2015) Algorithm aversion: people erroneously avoid algorithms after seeing them err. J Exp Psychol Gen 144:114–126. https://doi.org/10.1037/xge0000033
Edelman (2021) Edelman trust barometer 2021
Epstein Z, Payne BH, Shen JH, et al (2018) TuringBox: An experimental platform for the evaluation of AI systems. In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, Stockholm, Sweden, pp 5826–5828
Floridi L, Cowls J (2019) A unified framework of five principles for AI in society. Harv Data Sci Rev. https://doi.org/10.1162/99608f92.8cd550d1
Floridi L, Cowls J, Beltrametti M et al (2018) AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach 28:689–707. https://doi.org/10.1007/s11023-018-9482-5
Frazier ML, Johnson PD, Fainshmidt S (2013) Development and validation of a propensity to trust scale. J Trust Res 3:76–97. https://doi.org/10.1080/21515581.2013.820026
Fulmer A, Dirks K (2018) Multilevel trust: a theoretical and practical imperative. J Trust Res 8:137–141. https://doi.org/10.1080/21515581.2018.1531657
Gefen D (2000) E-commerce: the role of familiarity and trust. Omega 28:725–737. https://doi.org/10.1016/S0305-0483(00)00021-9
Gefen D, Karahanna E, Straub DW (2003) Trust and TAM in online shopping: an integrated model. MIS Q 27:51–90. https://doi.org/10.2307/30036519
Gillath O, Ai T, Branicky MS, et al (2021) Attachment and trust in artificial intelligence. Comput Hum Behav 115:106607
Gulati R (1995) Does familiarity breed trust? The implications of repeated ties for contractual choice in alliances. Acad Manag J 38:85–112. https://doi.org/10.2307/256729
Hagendorff T (2020) The ethics of AI ethics: an evaluation of guidelines. Minds Mach 30:99–120. https://doi.org/10.1007/s11023-020-09517-8
Hancock PA, Billings DR, Schaefer KE et al (2011) A meta-analysis of factors affecting trust in human-robot interaction. Hum Factors 53:517–527. https://doi.org/10.1177/0018720811417254
Helberger N, Araujo T, de Vreese CH (2020) Who is the fairest of them all? Public attitudes and expectations regarding automated decision-making. Comput Law Secur Rev 39:105456. https://doi.org/10.1016/j.clsr.2020.105456
High-Level Expert Group on Artificial Intelligence (AI HLEG) (2019) Ethics guidelines for trustworthy AI. European Commission, Brussels
Hoff KA, Bashir M (2015) Trust in automation: integrating empirical evidence on factors that influence trust. Hum Factors J Hum Factors Ergon Soc 57:407–434. https://doi.org/10.1177/0018720814547570
Jobin A, Ienca M, Vayena E (2019) The global landscape of AI ethics guidelines. Nat Mach Intell 1:389–399. https://doi.org/10.1038/s42256-019-0088-2
Lankton N, McKnight DH, Tripp J (2015) Technology, humanness, and trust: Rethinking trust in technology. J Assoc Inf Syst 16:880–918. https://doi.org/10.17705/1jais.00411
Lee MK (2018) Understanding perception of algorithmic decisions: fairness, trust, and emotion in response to algorithmic management. Big Data Soc 5:205395171875668. https://doi.org/10.1177/2053951718756684
Lee JD, See KA (2004) Trust in automation: designing for appropriate reliance. Hum Factors 46:50–80. https://doi.org/10.1518/hfes.46.1.50_30392
Logg JM, Minson JA, Moore DA (2019) Algorithm appreciation: People prefer algorithmic to human judgment. Org Behav Hum Decis Process 151:90–103. https://doi.org/10.1016/j.obhdp.2018.12.005
Madhavan P, Wiegmann DA (2007) Effects of information source, pedigree, and reliability on operator interaction with decision support systems. Hum Factors J Hum Factors Ergon Soc 49:773–785. https://doi.org/10.1518/001872007X230154
Mayer RC, Davis JH, Schoorman FD (1995) An integrative model of organizational trust. Acad Manag Rev 20:709–734
McKnight DH, Carter M, Thatcher JB, Clay PF (2011) Trust in a specific technology: an investigation of its components and measures. ACM Trans Manag Inf Syst 2:1–25. https://doi.org/10.1145/1985347.1985353
Mittelstadt B (2019) Principles alone cannot guarantee ethical AI. Nat Mach Intell 1:501–507. https://doi.org/10.1038/s42256-019-0114-4
Mökander J, Axente M (2021) Ethics-based auditing of automated decision-making systems: intervention points and policy implications. AI Soc. https://doi.org/10.1007/s00146-021-01286-x
Mökander J, Floridi L (2021) Ethics-based auditing to develop trustworthy AI. Minds Mach 31:323–327. https://doi.org/10.1007/s11023-021-09557-8
Mökander J, Morley J, Taddeo M, Floridi L (2021) Ethics-based auditing of automated decision-making systems: nature, scope, and limitations. Sci Eng Ethics 27:44. https://doi.org/10.1007/s11948-021-00319-4
OECD (2019) Artificial intelligence in society. OECD Publishing, Paris
Roski J, Maier EJ, Vigilante K et al (2021) Enhancing trust in AI through industry self-governance. J Am Med Inform Assoc 28:1582–1590. https://doi.org/10.1093/jamia/ocab065
Rotenberg KJ (2019) The psychology of interpersonal trust: theory and research. Routledge, Abingdon, Oxon, New York
Rousseau DM, Sitkin SB, Burt RS, Camerer C (1998) Not so different after all: a cross-discipline view of trust. Acad Manag Rev 23:393–404. https://doi.org/10.5465/amr.1998.926617
Schoorman FD, Mayer RC, Davis JH (2007) An integrative model of organizational trust: Past, present, and future. Acad Manag Rev 32:344–354. https://doi.org/10.5465/amr.2007.24348410
Shin D (2021) The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. Int J Hum-Comput Stud 146:102551. https://doi.org/10.1016/j.ijhcs.2020.102551
Shin D, Park YJ (2019) Role of fairness, accountability, and transparency in algorithmic affordance. Comput Hum Behav 98:277–284. https://doi.org/10.1016/j.chb.2019.04.019
Simonite T (2021) What Really Happened When Google Ousted Timnit Gebru. Wired
Sundar SS, Kim J (2019) Machine heuristic: when we trust computers more than humans with our personal information. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI ’19. ACM Press, Glasgow, Scotland, UK, pp 1–9
Thiebes S, Lins S, Sunyaev A (2021) Trustworthy artificial intelligence. Electron Mark 31:447–464. https://doi.org/10.1007/s12525-020-00441-4
Torresen J (2018) A review of future and ethical perspectives of robotics and AI. Front Robot AI 4:75. https://doi.org/10.3389/frobt.2017.00075
Wu K, Zhao Y, Zhu Q et al (2011) A meta-analysis of the impact of trust on technology acceptance model: investigation of moderating influence of subject and context type. Int J Inf Manag 31:572–581. https://doi.org/10.1016/j.ijinfomgt.2011.03.004
Ethics declarations
Conflict of interest
The authors declare that there is no conflict of interest. This project was funded in part by a National Association of Broadcasters Pilot Grant.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix: Survey questionnaires, scales, and reliability coefficients
| Variable | Survey items | Scale | Reliability |
| --- | --- | --- | --- |
| Trust propensity | I usually trust people until they give me a reason not to trust them | 1 (strongly disagree) – 5 (strongly agree) | α = 0.85 |
| | I generally give people the benefit of the doubt when I first meet them | | |
| | My typical approach is to trust new acquaintances until they prove I should not trust them | | |
| Trust in institutions | To what extent do you trust the following institutions? [Federal government] | 1 (do not trust) – 5 (highly trust) | α = 0.90 |
| | To what extent do you trust the following institutions? [Corporations] | | |
| | To what extent do you trust the following institutions? [Big technology companies] | | |
| Familiarity with AI technologies | Here are some examples of smart technology that we encounter every day, which uses AI. How often do you use these technologies? [Smart home devices (e.g., Google Nest, Ring, Blink)] | 1 (never) – 5 (very frequently) | α = 0.88 |
| | How often do you use these technologies? [Smart speakers (e.g., Amazon Echo, Google Home, Apple HomePod, Sonos)] | | |
| | How often do you use these technologies? [Virtual assistants (e.g., Siri, Alexa, Cortana)] | | |
| | How often do you use these technologies? [Wearable devices (e.g., Fitbit, Apple Watch)] | | |
| Importance of ethics principles | How important are these values in the design of AI and smart technologies that interact with us? [Privacy and data governance: Competent authorities who implement legal frameworks and guidelines for testing and certification of AI-enabled products and services.] | 1 (not at all important) – 5 (extremely important) | |
| | How important are these values in the design of AI and smart technologies that interact with us? [Human agency and oversight: Human oversight and control throughout the lifecycle of AI products.] | | |
| | How important are these values in the design of AI and smart technologies that interact with us? [Technical robustness and safety: Systems are developed in a responsible manner with proper consideration of risks.] | | |
| | How important are these values in the design of AI and smart technologies that interact with us? [Transparency: Transparency requirements that reduce the opacity of systems.] | | |
| | How important are these values in the design of AI and smart technologies that interact with us? [Diversity, non-discrimination and fairness: The application of rules designed to protect fundamental human rights, such as equality.] | | |
| | How important are these values in the design of AI and smart technologies that interact with us? [Societal and environmental well-being: AI systems that conform to the best standards of sustainability and address issues like climate change and environmental justice.] | | |
| | How important are these values in the design of AI and smart technologies that interact with us? [Accountability: AI at any step is accountable for considering the system’s impact in the world.] | | |
| Human-like trust in AI | Smart technologies care about our well-being. (Benevolence) | 1 (strongly disagree) – 5 (strongly agree) | α = 0.92 |
| | Smart technologies are sincerely concerned about addressing the problems of human users. (Benevolence) | | |
| | Smart technologies try to be helpful and do not operate out of selfish interest. (Benevolence) | | |
| | Smart technologies are truthful in their dealings. (Integrity) | | |
| | Smart technologies keep their commitments and deliver on their promises. (Integrity) | | |
| | Smart technologies are honest and do not abuse the information and advantage they have over their users. (Integrity) | | |
| Functionality trust in AI | Smart technologies work well. (Competence) | 1 (strongly disagree) – 5 (strongly agree) | α = 0.91 |
| | Smart technologies have the features necessary to complete key tasks. (Competence) | | |
| | Smart technologies are competent in their area of expertise. (Competence) | | |
| | Smart technologies are reliable. (Competence) | | |
| | Smart technologies are dependable. (Competence) | | |
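For readers who want to reproduce reliability coefficients of the kind reported above, or the regression analysis described in the abstract, the sketch below shows one common way to compute Cronbach's alpha for a multi-item scale and to regress a trust dimension on the seven ethics-requirement ratings. It is a minimal illustration only, not the authors' analysis code; the DataFrame `df` and all column names are hypothetical placeholders.

```python
# Illustrative sketch only, not the authors' analysis code.
# Assumes a pandas DataFrame `df` of survey responses with hypothetical
# column names: 'benevolence1'..'integrity3' for the six human-like trust
# items and one column per ethics-requirement rating.
import pandas as pd
import statsmodels.api as sm


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents)."""
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    k = items.shape[1]
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)


# Reliability of the human-like trust scale (compare with the alpha in the table).
human_like = df[['benevolence1', 'benevolence2', 'benevolence3',
                 'integrity1', 'integrity2', 'integrity3']]
print('alpha =', round(cronbach_alpha(human_like), 2))

# OLS regression of the scale mean on the seven ethics-requirement ratings.
y = human_like.mean(axis=1)
X = sm.add_constant(df[['privacy', 'oversight', 'robustness', 'transparency',
                        'fairness', 'wellbeing', 'accountability']])
print(sm.OLS(y, X).fit().summary())
```

The same `cronbach_alpha` function applies to the other multi-item scales in the table, and swapping in the mean of the competence items would give the analogous model for functionality trust.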
About this article
Cite this article
Choung, H., David, P. & Ross, A. Trust and ethics in AI. AI & Soc 38, 733–745 (2023). https://doi.org/10.1007/s00146-022-01473-4