DOI: 10.1145/3686038.3686058
Research Article · Open Access

Measurable Trust: The Key to Unlocking User Confidence in Black-Box AI

Published: 16 September 2024

Abstract

Given the pervasive integration of artificial intelligence (AI) into our daily lives, establishing public trust is paramount for maximizing AI's benefits and ensuring its responsible use. This research proposes an investigation into the feasibility of developing a globally accepted, context-specific "trustworthiness score" for AI systems. We recognize that trust is a dynamic construct influenced by individual experiences, situational factors, and inherent user characteristics. We hypothesize that by quantifying behavioral manifestations of trust, such as user acceptance and confidence levels during interactions, and by incorporating expert assessments and ethical considerations, we can indirectly measure AI trustworthiness. This approach aims to create a standardized framework that can guide responsible AI development, mitigate potential risks, and empower users to make informed decisions about trusting AI systems. The proposed research is particularly relevant in high-stakes sectors such as healthcare and finance, where AI decisions can significantly impact individuals and society, underscoring the need for transparency, accountability, and robust mechanisms to evaluate and build trust in AI technologies.
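The abstract proposes, but does not specify, how behavioral signals, expert assessments, and ethical considerations would combine into a single context-specific score. The sketch below is a minimal illustration of one possible aggregation, assuming normalized signals in [0, 1] and a context-dependent weighted sum; every name, weight, and signal choice here is an assumption for illustration, not the authors' method.

```python
from dataclasses import dataclass

# Hypothetical illustration only: the paper proposes, but does not define,
# a context-specific trustworthiness score. This sketch assumes a simple
# weighted aggregation of signals normalized to [0, 1].

@dataclass
class TrustSignals:
    acceptance_rate: float     # fraction of AI recommendations users accept
    user_confidence: float     # mean self-reported confidence, rescaled to [0, 1]
    expert_assessment: float   # mean expert audit score, rescaled to [0, 1]
    ethics_compliance: float   # share of applicable ethics checks passed

# Context-specific weights (assumed values, not from the paper): high-stakes
# domains weight expert review and ethics more heavily than user behavior.
CONTEXT_WEIGHTS = {
    "healthcare": {"acceptance_rate": 0.15, "user_confidence": 0.15,
                   "expert_assessment": 0.40, "ethics_compliance": 0.30},
    "consumer":   {"acceptance_rate": 0.35, "user_confidence": 0.30,
                   "expert_assessment": 0.20, "ethics_compliance": 0.15},
}

def trustworthiness_score(signals: TrustSignals, context: str) -> float:
    """Return a weighted trust score in [0, 1] for the given context."""
    weights = CONTEXT_WEIGHTS[context]
    values = vars(signals)  # dataclass fields as a name -> value dict
    for name, v in values.items():
        if not 0.0 <= v <= 1.0:
            raise ValueError(f"{name} must be normalized to [0, 1], got {v}")
    return sum(weights[name] * v for name, v in values.items())

if __name__ == "__main__":
    s = TrustSignals(acceptance_rate=0.82, user_confidence=0.74,
                     expert_assessment=0.68, ethics_compliance=0.90)
    print(f"healthcare: {trustworthiness_score(s, 'healthcare'):.3f}")
    print(f"consumer:   {trustworthiness_score(s, 'consumer'):.3f}")
```

A weighted sum is only one of many possible aggregations; the design point it illustrates is that the same observed signals can yield different scores in different contexts, for example weighting expert audits and ethics compliance more heavily in healthcare than in consumer applications.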


Published In

TAS '24: Proceedings of the Second International Symposium on Trustworthy Autonomous Systems
September 2024, 335 pages
ISBN: 9798400709890
DOI: 10.1145/3686038
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States
