Trust and ethics in AI

Open Forum

AI & SOCIETY
Abstract

With the growing influence of artificial intelligence (AI) in our lives, the ethical implications of AI have received attention from various communities. Building on previous work on trust in people and technology, we advance a multidimensional, multilevel conceptualization of trust in AI and examine the relationship between trust and ethics using data from a survey of a national sample in the U.S. This paper offers two key dimensions of trust in AI (human-like trust and functionality trust) and presents a multilevel conceptualization of trust in which dispositional, institutional, and experiential trust are each significantly correlated with the trust dimensions. Along with trust in AI, we examine perceptions of the importance of the seven ethics requirements for AI offered by the European Commission’s High-Level Expert Group, and we evaluate the association between the ethics requirements and trust through regression analysis. Findings suggest that the ethical requirement of societal and environmental well-being is positively associated with human-like trust in AI, while two other ethical requirements, accountability and technical robustness, are significantly associated with functionality trust in AI. Further, trust in AI was observed to be higher than trust in other institutions. Drawing from our findings, we offer a multidimensional framework of trust, informed by ethical values, to foster the acceptance of AI as a trustworthy technology.
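The regression step described in the abstract can be sketched as follows. The data below are simulated purely for illustration: the sample size, variable names, and effect sizes are assumptions for the sketch, not the study's actual dataset or estimates. It regresses a "functionality trust" outcome on ratings of the seven ethics requirements via ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of survey respondents

# Columns: 1-5 ratings of the seven HLEG ethics requirements, in the order
# privacy, human agency, technical robustness, transparency, diversity,
# societal well-being, accountability.
ethics = rng.integers(1, 6, size=(n, 7)).astype(float)

# Simulate a functionality-trust outcome driven mainly by technical
# robustness (column 2) and accountability (column 6), mirroring the
# pattern of findings reported in the abstract.
true_beta = np.array([0.0, 0.0, 0.4, 0.0, 0.0, 0.0, 0.5])
trust = 1.0 + ethics @ true_beta + rng.normal(0, 0.3, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), ethics])
beta_hat, *_ = np.linalg.lstsq(X, trust, rcond=None)

print(np.round(beta_hat, 2))  # intercept followed by seven slopes
```

With simulated data the recovered slopes for robustness and accountability land close to the values used to generate the outcome; on real survey data one would also report standard errors and fit statistics, e.g. via a statistics package.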

Fig. 1

Fig. 2

Data availability

The dataset analyzed in the current study is available from the corresponding author on request.


Author information

Corresponding author

Correspondence to Hyesun Choung.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest. This project was funded in part by a National Association of Broadcasters Pilot Grant.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Survey questionnaires, scales, and reliability coefficients

Trust propensity
Scale: 1 (strongly disagree) – 5 (strongly agree); α = 0.85
  • I usually trust people until they give me a reason not to trust them.
  • I generally give people the benefit of the doubt when I first meet them.
  • My typical approach is to trust new acquaintances until they prove I should not trust them.

Trust in institutions
Scale: 1 (do not trust) – 5 (highly trust); α = 0.90
Stem: To what extent do you trust the following institutions?
  • Federal government
  • Corporations
  • Big technology companies

Familiarity with AI technologies
Scale: 1 (never) – 5 (very frequently); α = 0.88
Stem: Here are some examples of smart technology that we encounter every day, which uses AI. How often do you use these technologies?
  • Smart home devices (e.g., Google Nest, Ring, Blink)
  • Smart speakers (e.g., Amazon Echo, Google Home, Apple Homepod, Sonos)
  • Virtual assistants (e.g., Siri, Alexa, Cortana)
  • Wearable devices (e.g., Fitbit, Apple Watch)

Importance of ethics principles
Scale: 1 (not at all important) – 5 (extremely important)
Stem: How important are these values in the design of AI and smart technologies that interact with us?
  • Privacy and data governance: Competent authorities who implement legal frameworks and guidelines for testing and certification of AI-enabled products and services.
  • Human agency and oversight: Human oversight and control throughout the lifecycle of AI products.
  • Technical robustness and safety: Systems are developed in a responsible manner with proper consideration of risks.
  • Transparency: Transparency requirements that reduce the opacity of systems.
  • Diversity, non-discrimination and fairness: The application of rules designed to protect fundamental human rights, such as equality.
  • Societal and environmental well-being: AI systems that conform to the best standards of sustainability and address issues like climate change and environmental justice.
  • Accountability: AI at any step is accountable for considering the system’s impact in the world.

Human-like trust in AI
Scale: 1 (strongly disagree) – 5 (strongly agree); α = 0.92
  • Smart technologies care about our well-being. (Benevolence)
  • Smart technologies are sincerely concerned about addressing the problems of human users. (Benevolence)
  • Smart technologies try to be helpful and do not operate out of selfish interest. (Benevolence)
  • Smart technologies are truthful in their dealings. (Integrity)
  • Smart technologies keep their commitments and deliver on their promises. (Integrity)
  • Smart technologies are honest and do not abuse the information and advantage they have over their users. (Integrity)

Functionality trust in AI
Scale: 1 (strongly disagree) – 5 (strongly agree); α = 0.91
  • Smart technologies work well. (Competence)
  • Smart technologies have the features necessary to complete key tasks. (Competence)
  • Smart technologies are competent in their area of expertise. (Competence)
  • Smart technologies are reliable. (Competence)
  • Smart technologies are dependable. (Competence)
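The reliability coefficients (Cronbach's α) reported above can be computed from an item-level response matrix. A minimal sketch, using made-up responses to a hypothetical three-item scale (not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulate 200 respondents answering three items on a 1-5 scale,
# driven by a shared latent trait plus item-level noise.
rng = np.random.default_rng(1)
latent = rng.normal(3, 1, size=(200, 1))
responses = np.clip(np.round(latent + rng.normal(0, 0.7, (200, 3))), 1, 5)

print(round(cronbach_alpha(responses), 2))
```

Because the simulated items share most of their variance through the latent trait, α comes out well above the conventional 0.7 threshold, consistent in spirit with the coefficients in the table.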

About this article
Cite this article

Choung, H., David, P. & Ross, A. Trust and ethics in AI. AI & Soc 38, 733–745 (2023). https://doi.org/10.1007/s00146-022-01473-4
