
Explainable AI for Medical Imaging: Knowledge Matters

Chapter in Multi-faceted Deep Learning

Abstract

Cooperation between medical experts and virtual assistants depends on trust. Over recent years, machine learning algorithms have been able to construct models of high accuracy and predictive power. Yet in contrast to their earlier, hypothesis-driven counterparts, current data-driven models are increasingly criticized for their opaque decision-making processes. Safety-critical applications such as self-driving cars or health-status estimation cannot rely on benchmark-winning black-box models. They need prediction models whose rationale and logic can be explained in an understandable, human-readable format, not merely out of curiosity but also to expose and deter potential biases. In this chapter we discuss how Explainable Artificial Intelligence (XAI) addresses these issues in medical imaging. We also focus on machine learning approaches developed for breast cancer diagnosis and discuss the advent of deep learning in this particular domain. Despite the promising results achieved over the last few years, a careful analysis of the state of the art identifies several important challenges faced by deep learning approaches. We present emerging trends and proposals to overcome these challenges.
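
To make the idea of a human-readable rationale concrete, below is a minimal sketch of Grad-CAM (Selvaraju et al., 2017), one family of post-hoc XAI techniques for imaging models: it highlights the image regions that drove a CNN's prediction. The model (a torchvision ResNet-18), the hooked layer, and the random input tensor are illustrative assumptions, not code from the chapter.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained CNN standing in for a medical-imaging classifier (an assumption;
# the chapter's own models are not reproduced here).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

store = {}

def save_activation(module, inputs, output):
    store["act"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    store["grad"] = grad_output[0].detach()

# Hook the last convolutional block: its feature maps retain spatial layout.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

x = torch.randn(1, 3, 224, 224)            # stand-in for a preprocessed scan
logits = model(x)
top_class = logits.argmax(dim=1).item()
model.zero_grad()
logits[0, top_class].backward()            # gradient of the winning class score

# Grad-CAM: weight each feature map by its spatially averaged gradient,
# combine, keep positive evidence only, and upsample to input resolution.
weights = store["grad"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # (1, 1, 224, 224): per-pixel relevance for the prediction
```

Overlaid on the original image, such a map lets a clinician check whether the model attends to the lesion rather than to acquisition artifacts, the kind of bias-spotting the abstract calls for.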

Author information

Correspondence to Pascal Bourdon.


Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Bourdon, P., Ahmed, O.B., Urruty, T., Djemal, K., Fernandez-Maloigne, C. (2021). Explainable AI for Medical Imaging: Knowledge Matters. In: Benois-Pineau, J., Zemmari, A. (eds) Multi-faceted Deep Learning. Springer, Cham. https://doi.org/10.1007/978-3-030-74478-6_11

  • DOI: https://doi.org/10.1007/978-3-030-74478-6_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-74477-9

  • Online ISBN: 978-3-030-74478-6

  • eBook Packages: Computer Science (R0)
