
Explainable AI in Medical Diagnosis – Successes and Challenges

Chapter in: Künstliche Intelligenz im Gesundheitswesen

Abstract

The great success of modern, image-based AI methods and the accompanying interest in applying AI to critical decision-making processes have led to increased efforts to make intelligent systems transparent and explainable. Transparency is particularly important in the medical context, where computer-aided decisions can directly affect the treatment and well-being of patients, and is therefore essential for a safe transition from research into clinical practice. This chapter surveys the current state of modern methods for explaining and interpreting deep-learning-based AI algorithms in applications of medical research and disease diagnosis. First, notable early successes in using explainable AI to validate known biomarkers and to explore potential new ones are presented, along with methods for the post-hoc correction of AI models. Subsequently, some remaining challenges that stand in the way of deploying AI as a clinical decision support tool are critically discussed, and recommendations for the direction of future research are given.

* The authors contributed equally to the content of this chapter.



Author information

Correspondence to Adriano Lucieri or Muhammad Naseer Bajwa.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

About this chapter


Cite this chapter

Lucieri, A., Bajwa, M.N., Dengel, A., Ahmed, S. (2022). Erklärbare KI in der medizinischen Diagnose – Erfolge und Herausforderungen. In: Pfannstiel, M.A. (eds) Künstliche Intelligenz im Gesundheitswesen. Springer Gabler, Wiesbaden. https://doi.org/10.1007/978-3-658-33597-7_35


  • DOI: https://doi.org/10.1007/978-3-658-33597-7_35

  • Publisher Name: Springer Gabler, Wiesbaden

  • Print ISBN: 978-3-658-33596-0

  • Online ISBN: 978-3-658-33597-7

  • eBook Packages: Business and Economics (German Language)
