What is the Optimal Attribution Method for Explainable Ophthalmic Disease Classification?

  • Conference paper
In: Ophthalmic Medical Image Analysis (OMIA 2020)

Abstract

Deep learning methods for ophthalmic diagnosis have shown success in tasks such as segmentation and classification, but their adoption in clinical settings is limited by the black-box nature of the algorithms. Very few studies have explored the explainability of deep learning in this domain. Attribution methods explain a model's decisions by assigning a relevance score to each input feature. Here, we present a comparative analysis of multiple attribution methods for explaining the decisions of a convolutional neural network (CNN) in retinal disease classification from OCT images. This is the first such study to perform both quantitative and qualitative analyses: the former used robustness, runtime, and sensitivity, while the latter was carried out by a panel of eye care clinicians who rated the methods on how well they correlated with diagnostic features. The study emphasizes the need to develop explainable models that address end-user requirements, thereby increasing the clinical acceptance of deep learning.
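To make the idea of an attribution method concrete, the following is a minimal sketch (not the paper's code) of one commonly compared method, integrated gradients, applied to a toy differentiable "classifier" (a single logistic unit standing in for a CNN). All names, the model, and the inputs here are illustrative assumptions; the sketch only demonstrates how a relevance score is assigned to each input feature and that the scores satisfy the method's completeness property.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(x, w, b):
    # Stand-in "classifier": a single logistic unit over input features.
    return sigmoid(np.dot(w, x) + b)

def grad_model(x, w, b):
    # Analytic gradient of the logistic output w.r.t. the input features.
    p = model(x, w, b)
    return p * (1.0 - p) * w

def integrated_gradients(x, w, b, baseline=None, steps=100):
    # Integrated gradients: average the input gradient along the straight
    # path from the baseline to the input, then scale elementwise by
    # (input - baseline). Each entry of the result is that feature's
    # relevance score for this prediction.
    if baseline is None:
        baseline = np.zeros_like(x)
    alphas = (np.arange(steps) + 0.5) / steps  # midpoint rule
    grads = np.array([grad_model(baseline + a * (x - baseline), w, b)
                      for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

# Toy weights and a toy "image" of three features.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.8, 0.3, 0.6])

attr = integrated_gradients(x, w, b)
# Completeness: the attributions sum to f(input) - f(baseline).
print(attr, attr.sum(), model(x, w, b) - model(np.zeros(3), w, b))
```

For a real CNN on OCT images, the same recipe applies per pixel, with the analytic gradient replaced by backpropagation; libraries such as iNNvestigate (cited by the paper) provide these methods off the shelf.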



Acknowledgement

This work is supported by an NSERC Discovery Grant and NVIDIA Titan V GPU Grant to V.L. This research was enabled in part by Compute Canada (www.computecanada.ca).

Author information


Corresponding author

Correspondence to Amitojdeep Singh.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Singh, A. et al. (2020). What is the Optimal Attribution Method for Explainable Ophthalmic Disease Classification? In: Fu, H., Garvin, M.K., MacGillivray, T., Xu, Y., Zheng, Y. (eds.) Ophthalmic Medical Image Analysis. OMIA 2020. Lecture Notes in Computer Science, vol. 12069. Springer, Cham. https://doi.org/10.1007/978-3-030-63419-3_3

  • DOI: https://doi.org/10.1007/978-3-030-63419-3_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-63418-6

  • Online ISBN: 978-3-030-63419-3

  • eBook Packages: Computer Science (R0)
