
Is explainable artificial intelligence intrinsically valuable?

  • Open Forum
  • AI & SOCIETY

Abstract

There is general consensus that explainable artificial intelligence (“XAI”) is valuable, but significant divergence emerges when we try to articulate why, exactly, it is desirable. This question must be distinguished from two other kinds of questions in the XAI literature, which are sometimes asked and addressed simultaneously. The first and most obvious is the ‘how’ question: how do we develop technical strategies to achieve XAI? The second is specifying what kind of explanation is worth having in the first place. As difficult and important as these questions are, they are distinct from a third question: why do we want XAI at all? There is a vast literature on this question as well, but I wish to explore a different kind of answer. The most obvious way to answer it is to describe a desirable outcome that the right kind of explanation would likely help achieve, which makes the explanation instrumentally valuable. On this view, XAI is desirable as a means to some other value, such as fairness, trust, accountability, or governance. This family of arguments is obviously important, but I argue that explanations are also intrinsically valuable, because unexplainable systems can be dehumanizing. I argue that there are at least three independently valid versions of this kind of argument: an argument from participation, an argument from knowledge, and an argument from actualization. Each of these arguments that XAI is intrinsically valuable is independently compelling, in addition to the more obvious instrumental benefits of XAI.



Author information

Corresponding author

Correspondence to Nathan Colaner.

Ethics declarations

Conflict of interest

The author declares that he has no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Colaner, N. Is explainable artificial intelligence intrinsically valuable? AI & Soc 37, 231–238 (2022). https://doi.org/10.1007/s00146-021-01184-2

