Improving explainability results of convolutional neural networks in microscopy images

  • S.I.: Technologies of the 4th Industrial Revolution with applications
  • Published in: Neural Computing and Applications

Abstract

Explaining the predictions of neural networks, so as to understand which regions of an image most influence a decision, has become an imperative prerequisite when classifying medical images. For convolutional neural networks, gradient-weighted class activation mapping (Grad-CAM) is an explainability scheme frequently used to reveal connections between stimuli and predictions, especially in classification tasks that distinguish between well-separated objects in an image. However, certain categories of medical imaging, such as confocal and histopathology images, contain rich and dense information that differs from the cat-versus-dog paradigm. To improve the performance of Grad-CAM and the visualizations it generates, we propose a segmentation-based explainability scheme that exploits the common visual characteristics of each segment in an image to provide enhanced visualizations instead of highlighting rectangular regions. Explainability performance was quantified by applying random noise perturbations to microscopy images. The area over the perturbation curve shows that the proposed methodology, using the SLIC superpixel algorithm, improves on the original Grad-CAM technique by an average of 4% on the confocal dataset and 9% on the histopathology dataset. The results also show that the generated visualizations are more comprehensible to humans than the initial heatmaps.
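The segmentation-based refinement described in the abstract can be sketched as follows: given a Grad-CAM heatmap and a superpixel labeling of the same image (e.g. produced by the SLIC algorithm, available as `skimage.segmentation.slic`), each superpixel is assigned the mean heatmap value of its pixels, so that importance scores follow visual segments rather than rectangular regions. This is a minimal illustrative sketch, not the authors' implementation; the function name and parameters are assumptions.

```python
import numpy as np

def segment_average_cam(cam, segments):
    """Refine a Grad-CAM heatmap by averaging it over superpixels.

    cam      : (H, W) float array, the Grad-CAM heatmap
    segments : (H, W) int array of superpixel labels, same shape as cam
               (e.g. the output of skimage.segmentation.slic)

    Returns an (H, W) array where every pixel of a superpixel carries
    that superpixel's mean heatmap value, rescaled to [0, 1].
    """
    refined = np.zeros_like(cam, dtype=float)
    for label in np.unique(segments):
        mask = segments == label
        # one importance score per visual segment
        refined[mask] = cam[mask].mean()
    rng = refined.max() - refined.min()
    if rng > 0:  # normalize for visualization; skip if heatmap is constant
        refined = (refined - refined.min()) / rng
    return refined
```

The perturbation-based evaluation mentioned in the abstract would then replace the highest-scoring segments with random noise and track the drop in the model's class score, summarized by the area over the perturbation curve.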





Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Athanasios Kallipolitis.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Availability of data and materials

Part of the data that support the findings of this study is not openly available for reasons of sensitivity.


About this article


Cite this article

Kallipolitis, A., Yfantis, P. & Maglogiannis, I. Improving explainability results of convolutional neural networks in microscopy images. Neural Comput & Applic 35, 21535–21553 (2023). https://doi.org/10.1007/s00521-023-08452-w

