Abstract
Deep learning has demonstrated remarkable performance in the medical domain, with accuracy that rivals or even exceeds that of human experts. However, these models are "black boxes": opaque, non-intuitive, and difficult for people to understand. This lack of interpretability, trust, and transparency creates a barrier to the application of deep learning models in clinical practice. To overcome this problem, numerous studies on interpretability have been proposed. In this paper, we therefore comprehensively review the interpretability of deep learning in medical diagnosis based on the current literature, covering common interpretability methods used in the medical domain, various applications of interpretability for disease diagnosis, prevalent evaluation metrics, and several disease datasets. We also discuss the challenges of interpretability and future research directions. To the best of our knowledge, this is the first survey to summarize the various applications of interpretability methods for disease diagnosis.
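To give a concrete flavor of the post-hoc interpretability methods surveyed here, the following is a minimal Grad-CAM sketch (Selvaraju et al., 2017), the class-activation-map family being the most widely used in medical imaging. It is illustrative only: the ResNet-18 backbone, the choice of `layer4` as the target layer, and the random input standing in for a medical image are assumptions for the sake of a self-contained example, not a method prescribed by this survey.

```python
# Minimal Grad-CAM sketch: weight each feature map of a late conv layer by
# the average gradient of the target class score, sum, and apply ReLU.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()  # weights=None keeps the sketch offline;
                                              # use pretrained/fine-tuned weights in practice
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block; choosing the deepest conv layer is a common heuristic.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)            # placeholder for a preprocessed medical image
scores = model(x)
scores[0, scores.argmax()].backward()      # gradient of the top-scoring class

# Channel weights = spatially averaged gradients; combine, ReLU, upsample, normalize.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
```

In a diagnostic setting, the resulting heatmap is typically overlaid on the input image so a clinician can check whether the model attends to pathologically plausible regions.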
Acknowledgements
This work was supported by the National Natural Science Foundation of China (61976106, 61772242); China Postdoctoral Science Foundation (2017M611737); Six talent peaks project in Jiangsu Province, China (DZXX-122); Key special projects of health and family planning science and technology in Zhenjiang City, China (SHW2017019); Innovation capacity building Foundation of Jilin Provincial Development and Reform Commission (2021C038-7).
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Additional information
Communicated by J. Gao.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Teng, Q., Liu, Z., Song, Y. et al. A survey on the interpretability of deep learning in medical diagnosis. Multimedia Systems 28, 2335–2355 (2022). https://doi.org/10.1007/s00530-022-00960-4