
Explainable artificial intelligence to increase transparency for revolutionizing healthcare ecosystem and the road ahead

  • Review Article
  • Published:
Network Modeling Analysis in Health Informatics and Bioinformatics

Abstract

The integration of deep learning (DL) into co-clinical applications has generated substantial interest among researchers seeking to enhance clinical decision support systems across all aspects of disease management, including detection, prediction, diagnosis, treatment, and therapy. However, the inherent opacity of DL methods has raised concerns in the healthcare community, particularly in high-risk or complex medical domains. A significant gap remains in explaining and rendering transparent the inner workings of DL models applied to medical image analysis. While explainable artificial intelligence (XAI) has gained ground in diverse fields, including healthcare, numerous facets of medical imaging remain unexplored. Rapid advances in explainable DL (XDL) are therefore urgently needed, so that healthcare professionals can comprehend, assess, and contribute to decision-making before acting on a model's output. This viewpoint article presents an extensive review of XAI and XDL, surveying methods for opening the "black box" of DL, examining how techniques originally designed for other domains can be adapted to healthcare challenges, and discussing how physicians can interpret data-driven technologies effectively. This comprehensive literature review serves as a resource for scientists and medical practitioners, offering insights into both technical and clinical aspects. It helps identify ways to make XAI and XDL models more comprehensible, enabling informed model choices based on particular requirements and goals.
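To make the idea of "opening the black box" concrete, the sketch below illustrates one of the simplest perturbation-based explanation techniques the review covers: occlusion sensitivity, which slides a masking patch over an image and measures how much the model's score drops. The function and model names (`occlusion_saliency`, `toy_model`) are hypothetical and stand in for any scalar-scoring classifier; this is a minimal model-agnostic sketch, not a method proposed by the article itself.

```python
import numpy as np

def occlusion_saliency(model, image, patch=4, baseline=0.0):
    """Model-agnostic saliency map: slide an occluding patch over the
    image and record how much the model's score drops when that region
    is masked. Larger drops mark regions the prediction depends on."""
    h, w = image.shape
    base_score = model(image)
    saliency = np.zeros_like(image, dtype=float)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            # Score drop attributed uniformly to the occluded region.
            saliency[y:y + patch, x:x + patch] = base_score - model(occluded)
    return saliency

# Toy "classifier": scores an image by the mean intensity of its
# top-left quadrant, so only that region should light up in the map.
def toy_model(img):
    return img[:8, :8].mean()

img = np.ones((16, 16))
sal = occlusion_saliency(toy_model, img, patch=8)
# sal is high in the top-left quadrant and zero elsewhere.
```

Because it needs only forward passes, occlusion sensitivity applies to any imaging model; gradient-based methods such as Grad-CAM trade this generality for far lower computational cost.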


Data availability

Not applicable.


Acknowledgements

This research work was supported by the RFIER-Jio Institute research project "Computer Vision in Medical Imaging (CVMI)" under the "AI for All" research center.

Funding

This work was supported by RFIER-Jio Institute research grant #2022/33185004.

Author information

Authors and Affiliations

Authors

Contributions

SR wrote the main manuscript. TM and DP revised and added several sections. All authors prepared the figures and tables and reviewed the manuscript.

Corresponding authors

Correspondence to Sudipta Roy or Tanushree Meena.

Ethics declarations

Conflict of interest

Authors have no conflict to declare.

Ethical approval

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Roy, S., Pal, D. & Meena, T. Explainable artificial intelligence to increase transparency for revolutionizing healthcare ecosystem and the road ahead. Netw Model Anal Health Inform Bioinforma 13, 4 (2024). https://doi.org/10.1007/s13721-023-00437-y

