
Designing User-Centric Explanations for Medical Imaging with Informed Machine Learning

  • Conference paper
  • In: Design Science Research for a New Society: Society 5.0 (DESRIST 2023)

Abstract

A flawed algorithm released into clinical practice can cause unintended harm to patient health. Risk, regulation, responsibility, and ethics drive clinical users’ demand to understand and rely on the outputs of artificial intelligence. Explainable artificial intelligence (XAI) offers methods to render a model’s behavior understandable from different perspectives. Extant XAI, however, is mainly data-driven and designed to meet developers’ need to correct models rather than clinical users’ expectation that explanations reflect clinically relevant information. Informed machine learning (IML), which utilizes prior knowledge jointly with data to generate predictions, is a promising paradigm for enriching XAI with medical knowledge. To explore how IML can generate explanations that are congruent with clinical users’ demands and useful for medical decision-making, we conduct Action Design Research (ADR) in collaboration with a team of radiologists. We propose an IML-based XAI system that provides clinically relevant explanations of diagnostic imaging predictions. ADR allows us to narrow the gap between implementation and user evaluation, and we demonstrate the effectiveness of the system in a real-world application with clinicians. Beyond developing design principles for using IML for user-centric XAI in diagnostic imaging, the study demonstrates that an IML-based design adequately reflects clinicians’ conceptions. In this way, IML fosters greater understandability and trustworthiness of AI-enabled diagnostic imaging.
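To make the IML paradigm behind such a system concrete, the following is a minimal sketch (PyTorch) of one common way prior medical knowledge enters an imaging model: a shared backbone jointly predicts radiologist-defined semantic attributes and the diagnosis, so that the attribute scores, rather than a raw saliency map, serve as the user-facing explanation. The attribute vocabulary, architecture, and loss weighting below are illustrative assumptions, not the authors' actual system.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical radiologist-defined semantic attributes (prior knowledge);
# in practice the vocabulary would come from the clinical domain, e.g.
# terms used in lung-nodule reporting.
ATTRIBUTES = ["spiculation", "lobulation", "margin_sharpness", "calcification"]

class InformedDiagnosisNet(nn.Module):
    """Shared backbone with two heads: semantic attributes (the explanation)
    and the diagnosis itself."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int = 2):
        super().__init__()
        self.backbone = backbone                       # e.g. a ConvNeXt trunk
        self.attr_head = nn.Linear(feat_dim, len(ATTRIBUTES))
        self.dx_head = nn.Linear(feat_dim + len(ATTRIBUTES), num_classes)

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x)                       # (batch, feat_dim)
        attrs = torch.sigmoid(self.attr_head(feats))   # clinically named scores in [0, 1]
        logits = self.dx_head(torch.cat([feats, attrs], dim=1))
        return logits, attrs                           # attrs double as the explanation

def iml_loss(logits, attrs, y_dx, y_attrs, alpha: float = 0.5):
    # Joint objective: supervising the attributes injects domain knowledge
    # into training alongside the diagnostic label.
    return F.cross_entropy(logits, y_dx) + alpha * F.binary_cross_entropy(attrs, y_attrs)

Under a design like this, the clinician reads the explanation in concepts already used in radiological reporting, which is the congruence with clinical users' demands that the abstract argues for.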


Notes

  1. https://github.com/facebookresearch/ConvNeXt.

  2. https://wiki.cancerimagingarchive.net/x/rgAe.


Author information

Correspondence to Luis Oberste.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Oberste, L., Rüffer, F., Aydingül, O., Rink, J., Heinzl, A. (2023). Designing User-Centric Explanations for Medical Imaging with Informed Machine Learning. In: Gerber, A., Baskerville, R. (eds) Design Science Research for a New Society: Society 5.0. DESRIST 2023. Lecture Notes in Computer Science, vol 13873. Springer, Cham. https://doi.org/10.1007/978-3-031-32808-4_29


  • DOI: https://doi.org/10.1007/978-3-031-32808-4_29


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-32807-7

  • Online ISBN: 978-3-031-32808-4

  • eBook Packages: Computer Science, Computer Science (R0)
