Abstract
Explaining the predictions of neural networks, so that it is clear which regions of an image most influence a decision, has become an essential prerequisite for medical image classification. For convolutional neural networks, gradient-weighted class activation mapping (Grad-CAM) is an explainability scheme frequently used to reveal the connections between stimuli and predictions, particularly in classification tasks that distinguish between well-separated objects in an image. However, certain categories of medical imaging, such as confocal and histopathology images, contain rich and dense information that departs from the cat-versus-dog paradigm. To improve the performance of Grad-CAM and the visualizations it generates, we propose a segmentation-based explainability scheme that exploits the shared visual characteristics within each segment of an image to produce enhanced visualizations, rather than highlighting rectangular regions. Explainability performance is quantified by applying random noise perturbations to microscopy images. The area over the perturbation curve (AOPC) shows that the proposed methodology, using the SLIC superpixel algorithm, outperforms the original Grad-CAM technique by an average of 4% on the confocal dataset and 9% on the histopathology dataset. The results show that the generated visualizations are more comprehensible to humans than the initial heatmaps and demonstrate improved performance against the original Grad-CAM technique.
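For a concrete picture of the idea, the sketch below shows one plausible way to combine the two building blocks named in the abstract: a Grad-CAM heatmap is averaged inside SLIC superpixels so that relevance follows visual segments rather than rectangular blobs, and the resulting explanation is scored with the area over the perturbation curve. This is a minimal illustration under stated assumptions, not the authors' pipeline; the `heatmap` input, the `n_segments` value, and the `class_scores` sequence are placeholders.

```python
# Illustrative sketch (not the authors' code): refine a Grad-CAM heatmap
# with SLIC superpixels and score the explanation with AOPC.
import numpy as np
from skimage.segmentation import slic

def superpixel_cam(image: np.ndarray, heatmap: np.ndarray,
                   n_segments: int = 200) -> np.ndarray:
    """Average an (H, W) Grad-CAM heatmap inside each SLIC superpixel
    of an (H, W, 3) image, so every segment gets one relevance score."""
    segments = slic(image, n_segments=n_segments, compactness=10.0)
    refined = np.zeros_like(heatmap, dtype=float)
    for label in np.unique(segments):
        mask = segments == label
        refined[mask] = heatmap[mask].mean()  # one score per segment
    return refined

def aopc(class_scores: np.ndarray) -> float:
    """Area over the perturbation curve: the mean drop of the target
    class score as the k most relevant regions (k = 0..L) are replaced
    with random noise; higher values indicate a more faithful map."""
    return float(np.mean(class_scores[0] - class_scores))
```

In the evaluation loop assumed here, `class_scores[k]` would be the network's score for the true class after the k top-ranked superpixels have been replaced with random noise, which mirrors the perturbation test described in the abstract.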
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Availability of data and materials
Part of the data that support the findings of this study is not openly available for reasons of sensitivity.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Kallipolitis, A., Yfantis, P. & Maglogiannis, I. Improving explainability results of convolutional neural networks in microscopy images. Neural Comput & Applic 35, 21535–21553 (2023). https://doi.org/10.1007/s00521-023-08452-w