
M3d-CAM

A PyTorch Library to Generate 3D Attention Maps for Medical Deep Learning

  • Conference paper

In: Bildverarbeitung für die Medizin 2021

Part of the book series: Informatik aktuell

Abstract

Deep learning models achieve state-of-the-art results in a wide array of medical imaging problems. Yet the lack of interpretability of deep neural networks is a primary concern for medical practitioners and poses a considerable barrier to the deployment of such models in clinical practice. Several techniques have been developed for visualizing the decision process of DNNs. However, few implementations are openly available for the popular PyTorch library, and existing implementations are often limited to two-dimensional data and classification models. We present M3d-CAM, an easy-to-use library for generating attention maps of CNN-based PyTorch models that supports both 2D and 3D data and is applicable to both classification and segmentation models. The attention maps can be generated with multiple methods: Guided Backpropagation, Grad-CAM, Guided Grad-CAM and Grad-CAM++. The maps visualize the regions in the input data that most heavily influence the model prediction at a certain layer. A single line of code is sufficient for generating attention maps for a model, making M3d-CAM a plug-and-play solution that requires minimal prior knowledge.
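To make the Grad-CAM method mentioned in the abstract concrete, the core computation can be sketched as follows. This is a minimal, dependency-light NumPy illustration of the Grad-CAM weighting scheme itself, not M3d-CAM's actual implementation; the function name and toy shapes are illustrative only. Given a layer's feature maps and the gradients of the target score with respect to them, Grad-CAM global-average-pools each gradient channel into a weight, forms the weighted sum of feature maps, and applies a ReLU:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM attention map for one layer.

    activations: (K, H, W) feature maps of the chosen layer
    gradients:   (K, H, W) gradients of the target score
                 w.r.t. those feature maps
    """
    # Channel weights: global average pool over each gradient map
    weights = gradients.mean(axis=(1, 2))             # shape (K,)
    # Weighted sum of feature maps over the channel axis
    cam = np.tensordot(weights, activations, axes=1)  # shape (H, W)
    # ReLU: keep only regions with a positive influence
    cam = np.maximum(cam, 0.0)
    # Normalize to [0, 1] for visualization
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 4 feature channels on an 8x8 grid
rng = np.random.default_rng(0)
acts = rng.random((4, 8, 8))
grads = rng.random((4, 8, 8))
cam = grad_cam(acts, grads)
```

The same idea extends to 3D data by using (K, D, H, W) tensors and pooling the gradients over the three spatial axes, which is the setting M3d-CAM targets for volumetric medical images.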




Corresponding author

Correspondence to Karol Gotkowski.



Copyright information

© 2021 The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

About this paper


Cite this paper

Gotkowski, K., Gonzalez, C., Bucher, A., Mukhopadhyay, A. (2021). M3d-CAM. In: Palm, C., Deserno, T.M., Handels, H., Maier, A., Maier-Hein, K., Tolxdorff, T. (eds) Bildverarbeitung für die Medizin 2021. Informatik aktuell. Springer Vieweg, Wiesbaden. https://doi.org/10.1007/978-3-658-33198-6_52
