
A survey on the interpretability of deep learning in medical diagnosis

  • Regular Article
  • Published in Multimedia Systems

Abstract

Deep learning has demonstrated remarkable performance in the medical domain, with accuracy that rivals or even exceeds that of human experts. However, these models are "black-box" structures: opaque, non-intuitive, and difficult for people to understand. The resulting lack of interpretability, trust, and transparency is a barrier to the application of deep learning models in clinical practice. To overcome this problem, several studies on interpretability have been proposed. In this paper, we comprehensively review the interpretability of deep learning in medical diagnosis based on the current literature, covering common interpretability methods used in the medical domain, applications of interpretability to disease diagnosis, prevalent evaluation metrics, and several disease datasets. We also discuss the challenges of interpretability and directions for future research. To the best of our knowledge, this is the first survey to summarize the various applications of interpretability methods for disease diagnosis.
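For orientation, the sketch below illustrates one post-hoc interpretability method of the kind this survey reviews: Grad-CAM (Selvaraju et al.), which weights the last convolutional feature maps by the gradient of a class score to produce a coarse saliency heatmap. This is a minimal sketch under assumed placeholders, not code from any of the surveyed works: a torchvision ResNet-18 stands in for a diagnostic CNN, its `layer4` is the assumed target layer, and a random tensor replaces a preprocessed medical image.

```python
# Illustrative Grad-CAM sketch (assumptions: ResNet-18 backbone, layer4 as
# target layer, random input in place of a medical image).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
feats, grads = {}, {}

def fwd_hook(module, inputs, output):
    feats["a"] = output                                 # activations A^k of the last conv stage
    output.register_hook(lambda g: grads.update(a=g))   # gradient of the class score w.r.t. A^k

model.layer4.register_forward_hook(fwd_hook)

x = torch.randn(1, 3, 224, 224)        # placeholder for a preprocessed scan
scores = model(x)
cls = scores.argmax(dim=1).item()      # explain the predicted class
scores[0, cls].backward()

# Weight each feature map by its spatially averaged gradient, combine, apply
# ReLU, then upsample to input resolution and normalize to [0, 1] for overlay.
alpha = grads["a"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((alpha * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)                       # torch.Size([1, 1, 224, 224])
```

Overlaying the resulting map on the input image is the usual way such explanations are presented to clinicians, highlighting the regions that most influenced the prediction.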



Acknowledgements

This work was supported by the National Natural Science Foundation of China (61976106, 61772242); China Postdoctoral Science Foundation (2017M611737); Six talent peaks project in Jiangsu Province, China (DZXX-122); Key special projects of health and family planning science and technology in Zhenjiang City, China (SHW2017019); Innovation capacity building Foundation of Jilin Provincial Development and Reform Commission (2021C038-7).

Author information


Corresponding author

Correspondence to Yang Lu.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Communicated by J. Gao.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Teng, Q., Liu, Z., Song, Y. et al. A survey on the interpretability of deep learning in medical diagnosis. Multimedia Systems 28, 2335–2355 (2022). https://doi.org/10.1007/s00530-022-00960-4

