
Towards Self-explainable Classifiers and Regressors in Neuroimaging with Normalizing Flows

  • Conference paper
  • Machine Learning in Clinical Neuroimaging (MLCN 2021)

Abstract

Deep learning-based regression and classification models are used in most subareas of neuroimaging because of their accuracy and flexibility. While such models achieve state-of-the-art results in many different application scenarios, their decision-making process is usually difficult to explain. This black-box behaviour is problematic when non-technical users such as clinicians and patients need to trust them and make decisions based on their results. In this work, we propose to build self-explainable generative classifiers and regressors using a flexible and efficient normalizing flow framework. We directly exploit the invertibility of these normalizing flows to explain the decision-making process in a highly accessible way via consistent, spatially smooth attribution maps and counterfactual images for alternative prediction results. An evaluation using more than 5000 3D MR images highlights the explainability capabilities of the proposed models and shows that they achieve a similar level of accuracy as standard convolutional neural networks for image-based brain age regression and brain sex classification tasks.
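To make the mechanism described in the abstract concrete, the sketch below shows how an invertible model turns predictions into counterfactual explanations: encode an image to a latent code, edit the latent dimension that carries the prediction, and run the flow backwards. This is a minimal illustration assuming RealNVP-style affine coupling layers; the AffineCoupling and Flow classes, the 64-dimensional toy input, and the convention that z[:, 0] encodes the target are illustrative assumptions, not the authors' 3D architecture or training objective.

import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """RealNVP-style affine coupling layer: invertible in closed form."""

    def __init__(self, dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)  # bounded log-scales keep the inverse numerically stable
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=1), s.sum(dim=1)

    def inverse(self, z):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(z1).chunk(2, dim=1)
        s = torch.tanh(s)
        return torch.cat([z1, (z2 - t) * torch.exp(-s)], dim=1)


class Flow(nn.Module):
    """Stack of couplings with fixed channel permutations in between."""

    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(AffineCoupling(dim) for _ in range(n_layers))
        for i in range(n_layers):
            self.register_buffer(f"perm{i}", torch.randperm(dim))

    def forward(self, x):
        log_det = x.new_zeros(x.shape[0])
        for i, layer in enumerate(self.layers):
            x = x[:, getattr(self, f"perm{i}")]  # vary which half is transformed
            x, ld = layer(x)
            log_det = log_det + ld
        return x, log_det

    def inverse(self, z):
        for i in reversed(range(len(self.layers))):
            z = self.layers[i].inverse(z)
            z = z[:, torch.argsort(getattr(self, f"perm{i}"))]
        return z


# Counterfactual generation on a 64-dimensional toy stand-in for a flattened
# 3D MR image. Training (not shown) would fit the flow so that z[:, 0] carries
# the prediction (e.g., brain age) and the remaining latent dimensions follow
# a standard normal.
flow = Flow(dim=64)
x = torch.randn(1, 64)               # preprocessed, flattened input image
with torch.no_grad():
    z, _ = flow(x)
    prediction = z[0, 0].item()      # read the prediction off the latent code
    z_cf = z.clone()
    z_cf[:, 0] = prediction + 2.0    # e.g., "what if this brain were older?"
    x_cf = flow.inverse(z_cf)        # decode the counterfactual image exactly
attribution = (x_cf - x).abs()       # voxel-wise change acts as an attribution map

In a setup like this, the flow would be trained with a likelihood term on the non-target latent dimensions plus a supervised term tying z[:, 0] to the label, so that editing a single latent coordinate and inverting yields an in-distribution counterfactual rather than an adversarial perturbation.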


Notes

  1. https://brain-development.org/ixi-dataset/
  2. http://fcon_1000.projects.nitrc.org/indi/retro/dlbs.html


Acknowledgements

This work was supported by a T. Chen Fong postdoctoral fellowship and the River Fund at Calgary Foundation.

Author information

Correspondence to Matthias Wilms.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Wilms, M., Mouches, P., Bannister, J.J., Rajashekar, D., Langner, S., Forkert, N.D. (2021). Towards Self-explainable Classifiers and Regressors in Neuroimaging with Normalizing Flows. In: Abdulkadir, A., et al. (eds.) Machine Learning in Clinical Neuroimaging. MLCN 2021. Lecture Notes in Computer Science, vol. 13001. Springer, Cham. https://doi.org/10.1007/978-3-030-87586-2_3


  • DOI: https://doi.org/10.1007/978-3-030-87586-2_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87585-5

  • Online ISBN: 978-3-030-87586-2

  • eBook Packages: Computer Science, Computer Science (R0)
