
Evaluation of Randomized Input Sampling for Explanation (RISE) for 3D XAI - Proof of Concept for Black-Box Brain-Hemorrhage Classification

  • Conference paper
Proceedings of 2023 International Conference on Medical Imaging and Computer-Aided Diagnosis (MICAD 2023)

Abstract

An increasing number of AI products for medical imaging are offered to healthcare organizations, but these are frequently regarded as ‘black boxes’ that offer only limited insight into how the model functions. Model-agnostic methods are therefore required to provide Explainable AI (XAI), improve clinicians’ trust, and so accelerate adoption. However, there is currently a lack of published, systematically evaluated methods for explaining 3D classification models in medical imaging. Here, the popular explainability method RISE is modified so that, for the first time to the best of our knowledge, it can be applied to 3D medical image classification. The method was assessed against recently proposed guidelines for clinical explainable AI. Testing different parameters on a 3D CT dataset with a classifier trained to detect the presence of brain hemorrhage, we found that combining different algorithms to produce the 3D occlusion patterns gave better and more reliable explainability results. This was confirmed both by quantitative metrics and by a clinical expert’s interpretability assessment of the 3D saliency heatmaps.
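As context for the occlusion-based approach described in the abstract, the following is a minimal sketch of how RISE-style saliency estimation can be extended from 2D images to 3D volumes. It is written in Python/NumPy and is not the authors' implementation: the mask generator simply upsamples a coarse random binary grid, as in the original 2D RISE, rather than combining different occlusion-pattern algorithms as the paper evaluates, and model_fn, grid and p_keep are assumed placeholder names for the black-box classifier and masking parameters.

```python
import numpy as np
from scipy.ndimage import zoom


def generate_3d_masks(n_masks, vol_shape, grid=(4, 4, 4), p_keep=0.5, rng=None):
    """Random smooth 3D occlusion masks, following the original 2D RISE recipe.

    A coarse random binary grid is trilinearly upsampled to slightly larger
    than the input volume, then randomly cropped so that mask edges do not
    align with a fixed voxel grid across samples.
    """
    rng = np.random.default_rng() if rng is None else rng
    vol_shape = np.array(vol_shape)
    grid = np.array(grid)
    cell = np.ceil(vol_shape / grid).astype(int)           # voxels per grid cell
    up_shape = (grid + 1) * cell                            # upsampled mask size
    masks = np.empty((n_masks, *vol_shape), dtype=np.float32)
    for i in range(n_masks):
        coarse = (rng.random(tuple(grid)) < p_keep).astype(np.float32)
        smooth = zoom(coarse, up_shape / grid, order=1)     # trilinear upsampling
        z0, y0, x0 = (int(rng.integers(0, c + 1)) for c in cell)
        d, h, w = vol_shape
        masks[i] = smooth[z0:z0 + d, y0:y0 + h, x0:x0 + w]
    return masks


def rise_3d_saliency(model_fn, volume, masks):
    """Saliency volume = score-weighted average of the occlusion masks.

    model_fn is assumed to map a masked 3D volume to the classifier's
    probability for the class being explained (e.g. 'hemorrhage present').
    """
    scores = np.array([model_fn(volume * m) for m in masks])
    weighted = np.tensordot(scores, masks, axes=1)          # sum_i score_i * mask_i
    return weighted / (masks.sum(axis=0) + 1e-8)            # per-voxel normalisation
```

Under these assumptions, rise_3d_saliency(model_fn, ct_volume, generate_3d_masks(1000, ct_volume.shape)) returns a saliency volume with the same shape as the input CT, which can then be overlaid on the scan as a 3D heatmap for inspection.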

References

  1. Ahmad, R.: Reviewing the relationship between machines and radiology: the application of artificial intelligence. Acta Radiol. Open 9 (2021)

  2. Rafferty, A., Nenutil, R., Rajan, A.: Explainable artificial intelligence for breast tumour classification: helpful or harmful. In: Reyes, M., Henriques Abreu, P., Cardoso, J. (eds.) iMIMIC 2022. LNCS, vol. 13611, pp. 104–123. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-17976-1_10

  3. Nazir, S., Dickson, D.M., Akram, M.U.: Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks. Comput. Biol. Med. 156 (2023)

  4. Selvaraju, R.R., et al.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE ICCV, Venice (2017)

  5. Jin, W., Li, X., Fatehi, M., Hamarneh, G.: Guidelines and evaluation of clinical explainable AI in medical image analysis. Med. Image Anal. 84, 102684 (2023)

  6. Petsiuk, V., Das, A., Saenko, K.: RISE: randomized input sampling for explanation of black-box models. In: Proceedings of the BMVC, p. 151. BMVA Press, Durham, UK (2018)

  7. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier. In: ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144. ACM (2016)

  8. Lundberg, S.M., Lee, S.: A unified approach to interpreting model predictions. In: Advances in Neural Information Processing Systems, vol. 30 (2017)

  9. Sathish, R., Khare, S., Sheet, D.: Verifiable and energy efficient medical image analysis with quantised self-attentive deep neural networks. In: Albarqouni, S., et al. (eds.) DeCaF 2022, FAIR 2022. LNCS, vol. 13573, pp. 178–189. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-18523-6_17

  10. Goel, K., Sindhgatta, R., Kalra, S., Goel, R., Mutreja, P.: The effect of machine learning explanations on user trust for automated diagnosis of COVID-19. Comput. Biol. Med. 146 (2022)

  11. Cooper, J., Arandjelović, O., Harrison, D.J.: Believe the HiPe: hierarchical perturbation for fast, robust, and model-agnostic saliency mapping. Pattern Recognit. 129, 108743 (2022)

  12. Flanders, A.E., et al.: Construction of a machine learning dataset through collaboration: the RSNA 2019 brain CT hemorrhage challenge. Radiol. Artif. Intell. (2020). https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection

  13. Rockafellar, R.T., Wets, R.J.: Variational Analysis, vol. 317. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-02431-3

  14. Heimann, T., van Ginneken, B., Styner, M.A., Arzhaeva, Y., et al.: Comparison and evaluation of methods for liver segmentation from CT datasets. IEEE Trans. Med. Imaging 28, 1251–1265 (2009)

  15. Zou, K.H., et al.: Statistical validation of image segmentation quality based on a spatial overlap index. Acad. Radiol. 11, 178–189 (2004)

Acknowledgements

This research was funded by Innovate UK, grant 10033899. It was further supported by the Wellcome/EPSRC Centre for Medical Engineering [WT 203148/Z/16/Z].

Author information

Corresponding author

Correspondence to Jack Highton.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Highton, J., Chong, Q.Z., Crawley, R., Schnabel, J.A., Bhatia, K.K. (2024). Evaluation of Randomized Input Sampling for Explanation (RISE) for 3D XAI - Proof of Concept for Black-Box Brain-Hemorrhage Classification. In: Su, R., Zhang, Y.D., Frangi, A.F. (eds.) Proceedings of 2023 International Conference on Medical Imaging and Computer-Aided Diagnosis (MICAD 2023). MICAD 2023. Lecture Notes in Electrical Engineering, vol. 1166. Springer, Singapore. https://doi.org/10.1007/978-981-97-1335-6_4

  • DOI: https://doi.org/10.1007/978-981-97-1335-6_4

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-1334-9

  • Online ISBN: 978-981-97-1335-6

  • eBook Packages: Computer Science, Computer Science (R0)
