
Effects of Fairness and Explanation on Trust in Ethical AI

  • Conference paper
  • Machine Learning and Knowledge Extraction (CD-MAKE 2022)

Abstract

AI ethics has been a much-discussed topic in recent years, and fairness and explainability are two of its central principles for trustworthy AI. In this paper, the impact of AI explanation and fairness on user trust in AI-assisted decisions is investigated. For this purpose, a user study was conducted that simulated AI-assisted decision making in a health insurance scenario. The results show that fairness affects user trust only when the fairness level is low, in which case it reduces trust; adding explanations, by contrast, increases user trust in AI-assisted decision making. The results further show that the use of AI explanations and fairness statements in AI applications is complex: not only the type of explanation but also the level of fairness introduced must be considered. This is a strong motivation for further work.
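The study's central manipulation is the "fairness level" of the AI's decisions. As a purely illustrative aid (not the paper's method), the following minimal Python sketch scores one common group-fairness notion, demographic parity, as the gap between per-group approval rates; the function name, group labels, and toy data are all hypothetical.

    # Illustrative sketch: scoring a "fairness level" via demographic parity,
    # one common group-fairness measure. This is an assumption for exposition;
    # the paper may operationalise fairness differently.
    from collections import defaultdict

    def demographic_parity_gap(decisions):
        """decisions: iterable of (group, approved) pairs, approved in {0, 1}.
        Returns the largest difference in approval rates between groups."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += ok
        rates = [approved[g] / totals[g] for g in totals]
        return max(rates) - min(rates)

    # Toy insurance decisions: group A is approved far more often than group B,
    # which would correspond to a "low fairness" condition in a study like this.
    sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    print(demographic_parity_gap(sample))  # ~0.33: a noticeable disparity

A gap near 0 would correspond to a high-fairness condition, while a large gap, as in the toy sample, would correspond to the kind of low-fairness condition that the study found to reduce user trust.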

Acknowledgements

This work does not raise any ethical issues. Parts of this work have been funded by the Austrian Science Fund (FWF), project P-32554 "explainable Artificial Intelligence", and by the Australian UTS STEM-HASS Strategic Research Fund 2021.

Author information

Corresponding author

Correspondence to Jianlong Zhou.

Copyright information

© 2022 IFIP International Federation for Information Processing

About this paper

Cite this paper

Angerschmid, A., Theuermann, K., Holzinger, A., Chen, F., Zhou, J. (2022). Effects of Fairness and Explanation on Trust in Ethical AI. In: Holzinger, A., Kieseberg, P., Tjoa, A.M., Weippl, E. (eds) Machine Learning and Knowledge Extraction. CD-MAKE 2022. Lecture Notes in Computer Science, vol 13480. Springer, Cham. https://doi.org/10.1007/978-3-031-14463-9_4

  • DOI: https://doi.org/10.1007/978-3-031-14463-9_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-14462-2

  • Online ISBN: 978-3-031-14463-9

  • eBook Packages: Computer Science, Computer Science (R0)
