Abstract
An increasing number of AI products for medical imaging are offered to healthcare organizations, but these are frequently treated as 'black boxes', offering only limited insight into how the AI model functions. Model-agnostic methods are therefore required to provide Explainable AI (XAI), improve clinicians' trust and thus accelerate adoption. However, there is currently a lack of published methods, with systematic evaluation, for explaining 3D classification models in medical imaging applications. Here, the popular explainability method RISE is modified so that, for the first time to the best of our knowledge, it can be applied to 3D medical image classification. The method was assessed using recently proposed guidelines for clinical explainable AI. When different parameters were tested using a 3D CT dataset and a classifier that detects the presence of brain hemorrhage, we found that combining different algorithms to produce the 3D occlusion patterns led to better and more reliable explainability results. This was confirmed both by quantitative metrics and by a clinical expert's interpretability assessment of the 3D saliency heatmaps.
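For orientation, below is a minimal Python sketch of the baseline RISE procedure (Petsiuk et al., 2018) extended from 2D images to 3D volumes. It assumes a hypothetical black-box scorer model(volume) -> probability; the paper's contribution additionally varies and combines the algorithms used to generate the 3D occlusion patterns, which would replace generate_3d_masks here. This is an illustrative sketch, not the authors' exact implementation.

import numpy as np
from scipy.ndimage import zoom

def generate_3d_masks(n_masks, vol_shape, grid=7, p_keep=0.5, seed=0):
    # Low-resolution random binary grids, trilinearly upsampled to the
    # volume size with a random sub-cell shift (3D analogue of 2D RISE).
    rng = np.random.default_rng(seed)
    cell = np.ceil(np.array(vol_shape) / grid).astype(int)
    up_shape = cell * (grid + 1)
    masks = np.empty((n_masks, *vol_shape), dtype=np.float32)
    for i in range(n_masks):
        low = (rng.random((grid, grid, grid)) < p_keep).astype(np.float32)
        big = zoom(low, up_shape / grid, order=1)        # smooth upsampling
        ox, oy, oz = (rng.integers(0, c) for c in cell)  # random jitter
        masks[i] = big[ox:ox + vol_shape[0],
                       oy:oy + vol_shape[1],
                       oz:oz + vol_shape[2]]
    return masks

def rise_3d_saliency(model, volume, masks):
    # Score each masked volume with the black-box classifier and form a
    # score-weighted average of the masks: voxels whose occlusion lowers
    # the prediction the most receive the highest saliency.
    scores = np.array([model(volume * m) for m in masks], dtype=np.float32)
    saliency = np.tensordot(scores, masks, axes=1)
    return saliency / (masks.sum(axis=0) + 1e-8)         # per-voxel normalisation

A typical call would be rise_3d_saliency(cnn_prob, ct_volume, generate_3d_masks(1000, ct_volume.shape)), where cnn_prob wraps the 3D hemorrhage classifier; the number of masks trades off runtime against heatmap stability.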
References
Ahmad, R.: Reviewing the relationship between machines and radiology: the application of artificial intelligence. Acta Radiol. Open 9 (2021)
Rafferty, A., Nenutil, R., Rajan, A.: Explainable artificial intelligence for breast tumour classification: helpful or harmful. In: Reyes, M., Henriques Abreu, P., Cardoso, J. (eds.) iMIMIC 2022. LNCS, vol. 13611, pp. 104–123. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-17976-1_10
Nazir, S., Dickson, D.M., Akram, M.U.: Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks. Comput. Biol. Med. 156 (2023)
Selvaraju, R.R., et al.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE ICCV, Venice (2017)
Jin, W., Li, X., Fatehi, M., Hamarneh, G.: Guidelines and evaluation of clinical explainable AI in medical image analysis. Med. Image Anal. 84, 102684 (2023)
Petsiuk, V., Das, A., Saenko, K.: RISE: randomized input sampling for explanation of black-box models. In: Proceedings of the BMVC, p. 151. BMVA Press, Durham, UK (2018)
Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": Explaining the predictions of any classifier. In: ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144. ACM (2016)
Lundberg, S.M., Lee, S.: A unified approach to interpreting model predictions. In: Advances in Neural Information Processing Systems, vol. 30 (2017)
Sathish, R., Khare, S., Sheet, D.: Verifiable and energy efficient medical image analysis with quantised self-attentive deep neural networks. In: Albarqouni, S., et al. (eds.) DeCaF 2022, FAIR 2022. LNCS, vol. 13573, pp. 178–189. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-18523-6_17
Goel, K., Sindhgatta, R., Kalra, S., Goel, R., Mutreja, P.: The effect of machine learning explanations on user trust for automated diagnosis of COVID-19. Comput. Biol. Med. 146 (2022)
Cooper, J., Arandjelović, O., Harrison, D.J.: Believe the HiPe: hierarchical perturbation for fast, robust, and model-agnostic saliency mapping. Pattern Recognit. 129, 108743 (2022)
Flanders, A.E., et al.: Construction of a machine learning dataset through collaboration: the RSNA 2019 brain CT hemorrhage challenge. Radiol. Artif. Intell. (2020). https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection
Rockafellar, R.T., Wets, R.J.: Variational Analysis, vol. 317. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-02431-3
Heimann, T., van Ginneken, B., Styner, M.A., Arzhaeva, Y., et al.: Comparison and evaluation of methods for liver segmentation from CT datasets. IEEE Trans. Med. Imaging 28, 1251–1265 (2009)
Zou, K.H., et al.: Statistical validation of image segmentation quality based on a spatial overlap index. Acad. Radiol. 11, 178–189 (2004)
Acknowledgements
This research was funded by Innovate UK, grant 10033899. It was further supported by the Wellcome/EPSRC Centre for Medical Engineering [WT 203148/Z/16/Z].
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Highton, J., Chong, Q.Z., Crawley, R., Schnabel, J.A., Bhatia, K.K. (2024). Evaluation of Randomized Input Sampling for Explanation (RISE) for 3D XAI - Proof of Concept for Black-Box Brain-Hemorrhage Classification. In: Su, R., Zhang, YD., Frangi, A.F. (eds) Proceedings of 2023 International Conference on Medical Imaging and Computer-Aided Diagnosis (MICAD 2023). MICAD 2023. Lecture Notes in Electrical Engineering, vol 1166. Springer, Singapore. https://doi.org/10.1007/978-981-97-1335-6_4
DOI: https://doi.org/10.1007/978-981-97-1335-6_4
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-1334-9
Online ISBN: 978-981-97-1335-6
eBook Packages: Computer Science, Computer Science (R0)