Abstract
It is widely accepted that explainability is a requirement for the ethical use of artificial intelligence (AI) in health care. I challenge this Explainability Imperative (EI) by considering the following question: does the use of epistemically opaque medical AI systems violate existing legal standards for informed consent? If yes, and if the failure to meet such standards can be attributed to epistemic opacity, then explainability is a requirement for AI in health care. If not, then based on at least one metric of ethical medical practice (informed consent), explainability is not required for the ethical use of AI in health care. First, I show that the use of epistemically opaque AI applications is compatible with meeting accepted legal criteria for informed consent. Second, I argue that human experts are also black boxes with respect to the criteria by which they arrive at a diagnosis. Human experts can nonetheless meet established requirements for informed consent. I conclude that the use of black-box AI systems does not violate patients’ rights to informed consent, and thus, with respect to informed consent, explainability is not required for medical AI.



Notes
Deep learning is a specific type of artificial intelligence that refers to complex forms of machine learning, such as neural networks with multiple layers. Epistemic opacity and explainability imperatives largely concern deep learning systems.
Throughout this manuscript, when I refer to AI systems, I mean deep learning systems specifically.
I accept that informed consent preserves and upholds certain bioethical values like personal autonomy and non-domination. I also assume that existing legal requirements and guidelines adequately secure informed consent. I set aside broader questions and legitimate concerns about the fundamental ethical value of informed consent or the means by which it is granted.
I will primarily use the terms explainability and transparency, though the literature vacillates between a family of terms including: transparency, interpretability, surveyability, explicability, etc. There have been serious efforts to distinguish between these different terms [19] and to highlight the importance of not conflating these terms (Herzog 2022). For my purposes, they will function in similar ways—to either mitigate or eliminate epistemic opacity in AI applications. As such, I will follow scholars, like Ursin et al. [31], who include these terms under the umbrella concept of “explicability” or “explainability” and focus on “explainability”. The finer distinctions are valuable but beyond the scope of my more general argument that epistemic opacity does not violate patients’ right to informed consent.
See, for example: London [18], Zerilli et al. (2019), and Durán and Jongsma (2021).
Some exceptions exist, including diagnostic screening for STDs (such as HIV) and genetic tests, which, in many jurisdictions, do require the patient’s consent.
Such records are examples of peer-to-peer explanations of the sort that Holzinger et al. [11] seek to define for AI in the medical domain.
Drusen are yellow deposits under the retina that are made up of lipids and proteins.
Ground truth was based on the determinations of retinal specialists with over five years of experience in diabetic retinopathy grading. The DLS was shown to outperform two trained senior professional graders (non-retinal specialists), each with over five years of experience, by reference to the retinal specialists’ grading. For example, the two trained graders and the DLS were each given a retinal image to grade. The DLS outperformed the two trained graders because it gave the correct grading more often, where the correct grading was determined by the retinal specialists’ grade.
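To make this comparison concrete, the following is a minimal illustrative sketch in Python (not code from the cited study; the grader names and grade labels are hypothetical) of how agreement with a specialist-determined reference standard might be computed for the DLS and the trained graders.

```python
# Illustrative sketch only: scoring graders against a specialist-determined
# reference standard. All names and labels below are hypothetical.
from typing import Dict, List


def agreement_rate(gradings: List[str], reference: List[str]) -> float:
    """Fraction of images whose grade matches the specialists' reference grade."""
    assert len(gradings) == len(reference)
    matches = sum(g == r for g, r in zip(gradings, reference))
    return matches / len(reference)


# Hypothetical grades for five retinal images.
specialist_reference = ["none", "mild", "severe", "mild", "none"]
graders: Dict[str, List[str]] = {
    "trained_grader_1": ["none", "none", "severe", "mild", "none"],
    "trained_grader_2": ["mild", "mild", "severe", "none", "none"],
    "dls":              ["none", "mild", "severe", "mild", "none"],
}

for name, labels in graders.items():
    rate = agreement_rate(labels, specialist_reference)
    print(f"{name}: agreement with specialists = {rate:.2f}")
```

On this toy data the DLS agrees with the specialists more often than either trained grader, which is the sense of “outperform” used above.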
This is in line with Müller’s (2021) construal of the explainability required by AI regulations (like GDPR) as reflecting demands for justification first and foremost.
I thank Eric Winsberg for bringing this point to my attention.
References
Astromskė, K., Peičius, E., Astromskis, P.: Ethical and legal challenges of informed consent applying artificial intelligence in medical diagnostic consultations. AI Soc. 36(2), 509–520 (2021)
Carruthers, P.: The Opacity of Mind: An Integrative Theory of Self-Knowledge. OUP Oxford, Oxford (2011)
Char, D.S., Abràmoff, M.D., Feudtner, C.: Identifying ethical considerations for machine learning healthcare applications. Am. J. Bioethics 20(11), 7–17 (2020). https://doi.org/10.1080/15265161.2020.1819469
Cohen, I.G.: Informed consent and medical artificial intelligence: What to tell the patient? SSRN Electron. J. (2020). https://doi.org/10.2139/ssrn.3529576
Dai, L., Wu, L., Li, H., Cai, C., Wu, Q., Kong, H., Liu, R., et al.: A deep learning system for detecting diabetic retinopathy across the disease spectrum. Nat. Commun. 12, 3242 (2021). https://doi.org/10.1038/s41467-021-23458-5
Durán, J.M., Jongsma, K.R.: Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. J. Med. Ethics 47(5), 329–335 (2021). https://doi.org/10.1136/medethics-2020-106820
Esteva, A., Kuprel, B., Novoa, R.A., Ko, J., Swetter, S.M., Blau, H.M., Thrun, S.: Dermatologist–level classification of skin cancer with deep neural networks. Nature 542(7639), 115–118 (2017). https://doi.org/10.1038/nature21056
General Data Protection Regulation (GDPR): Official legal text. https://gdpr-info.eu/. Accessed Jun 3, 2022
Grote, T., Berens, P.: On the ethics of algorithmic decision-making in healthcare. J. Med. Ethics 46(3), 205–211 (2020). https://doi.org/10.1136/medethics-2019-105586
Hegdé, J., Bart, E.: Making expert decisions easier to fathom: on the explainability of visual object recognition expertise. Front Neurosci 12, 670 (2018). https://doi.org/10.3389/fnins.2018.00670
Holzinger, A., Biemann, C., Pattichis, C.S., Kell, D.B.: What do we need to build explainable AI systems for the medical domain? arXiv preprint (2017). https://doi.org/10.48550/arXiv.1712.09923
Holzinger, A., Langs, G., Denk, H., Zatloukal, K., Müller, H.: Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 9(4), e1312 (2019). https://doi.org/10.1002/widm.1312
Kaminski, M.E.: The right to explanation, explained. Berkeley Technol. Law J. 34(1), 189–218 (2019). https://doi.org/10.15779/Z38TD9N83H
Kempt, H., Heilinger, J.-C., Nagel, S.K.: Relative explainability and double standards in medical decision-making. Ethics Inf. Technol. 24(2), 1–10 (2022). https://doi.org/10.1007/s10676-022-09646-x
Krishnan, M.: Against interpretability: a critical examination of the interpretability problem in machine learning. Philos. Technol. 33(3), 487–502 (2020). https://doi.org/10.1007/s13347-019-00372-9
Kundu, S.: AI in medicine must be explainable. Nat. Med. 27(8), 1328–1328 (2021). https://doi.org/10.1038/s41591-021-01461-z
Lipton, Z.C.: The mythos of model interpretability. arXiv preprint (2016). https://doi.org/10.48550/arXiv.1606.03490
London, A.J.: Artificial intelligence and black-box medical decisions: accuracy versus explainability. Hastings Cent. Rep. 49(1), 15–21 (2019). https://doi.org/10.1002/hast.973
Mittelstadt, B., Russell, C., Wachter, S.: Explaining explanations in AI. In: Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19), pp. 279–288. Association for Computing Machinery, New York (2019). https://doi.org/10.1145/3287560.3287574
McCoy, L.G., Brenna, C.T.A., Chen, S.S., Vold, K., Das, S.: Believing in black boxes: machine learning for healthcare does not need explainability to be evidence-based. J. Clin. Epidemiol. 142, 252–257 (2022). https://doi.org/10.1016/j.jclinepi.2021.11.001
Ophthalmology Eye Exam Chart Note Medical Transcription Sample Reports. Accessed May 15, 2022. https://www.mtexamples.com/ophthalmology-eye-exam-chart-note-medical-transcription-sample-reports/
Ophthalmology SOAP Note Sample Report. Accessed May 15, 2022. https://www.medicaltranscriptionsamplereports.com/ophthalmology-soap-note-sample-report//
Powell, S.: “Medical Record Completion Guidelines,” Aug 24, 2011, 11. https://www.mclaren.org/uploads/public/documents/macomb/documents/medical%20staff%20services/ms%20Medical%20Record%20Completion%20Guidelines.pdf
Caruana, R., Lou, Y., Gehrke, J., Koch, P.: Intelligible models for healthcare. In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1721–1730. Sydney, Australia (2015). https://doi.org/10.1145/2783258.2788613
Sawicki, N.N.: A common law duty to disclose conscience-based limitations on medical practice. SSRN Scholarly Paper. Social Science Research Network, Rochester, NY (2017). https://papers.ssrn.com/abstract=3038016
Schiff, D., Borenstein, J.: How should clinicians communicate with patients about the roles of artificially intelligent team members? AMA J Ethics 21(2), E138–E145 (2019). https://doi.org/10.1001/amajethics.2019.138
Somashekhar, S.P., Sepúlveda, M.-J., Puglielli, S., Norden, A.D., Shortliffe, E.H., Rohit Kumar, C., Rauthan, A., et al.: Watson for oncology and breast cancer treatment recommendations: agreement with an expert multidisciplinary tumor board. Ann. Oncol. 29(2), 418–423 (2018). https://doi.org/10.1093/annonc/mdx781
Ting, D.S.W., Cheung, C.Y.-L., Lim, G., Tan, G.S.W., Quang, N.D., Gan, A., Hamzah, H., et al.: Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA 318(22), 2211–2223 (2017). https://doi.org/10.1001/jama.2017.18152
Uddin, M., Wang, Y., Woodbury-Smith, M.: Artificial intelligence for precision medicine in neurodevelopmental disorders. NPJ Digit. Med. 2, 112 (2019). https://doi.org/10.1038/s41746-019-0191-0
Ursin, F., Timmermann, C., Orzechowski, M., Steger, F.: Diagnosing diabetic retinopathy with artificial intelligence: What information should be included to ensure ethical informed consent? Front. Med. (2021). https://doi.org/10.3389/fmed.2021.695217
Ursin, F., Timmermann, C., Steger, F.: Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary? Bioethics 36(2), 143–153 (2022). https://doi.org/10.1111/bioe.12918
Müller, V.C.: Deep opacity undermines data protection and explainable artificial intelligence. In: Overcoming Opacity in Machine Learning, pp. 1–21 (2021). http://explanations.ai/symposium/AISB21_Opacity_Proceedings.pdf#page=20
Wadden, J.J.: Defining the undefinable: the black box problem in healthcare artificial intelligence. J. Med. Ethics. (2021). https://doi.org/10.1136/medethics-2021-107529
Wilson, R.F.: The promise of informed consent. In: Cohen, I.G., Hoffman, A.K., Sage, W.M. (eds.), vol. 1. Oxford University Press (2016). https://doi.org/10.1093/oxfordhb/9780199366521.013.53
Funding
The author did not receive support from any organization for the submitted work.
Ethics declarations
Conflict of interest
The author has no relevant financial or non-financial interests to disclose.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Kawamleh, S. Against explainability requirements for ethical artificial intelligence in health care. AI Ethics 3, 901–916 (2023). https://doi.org/10.1007/s43681-022-00212-1