
Marina Marie-Claire Höhne

Nachvollziehbare Künstliche Intelligenz: Methoden, Chancen und Risiken (Comprehensible Artificial Intelligence: Methods, Opportunities, and Risks)

From Black Box to White Box


Abstract

Explainable Artificial Intelligence (AI) has become a key topic, as it appears capable of closing the gap between AI in research and AI in real-world applications. In this way, the great potential that AI holds for many fields, such as cancer research and climate research, is to be made usable. But can we trust the explanations produced by AI at all? They, too, can be manipulated and thus pose risks to individuals. This article provides a rigorous analysis of the methods of Explainable Artificial Intelligence and of their opportunities and risks.



Author information

Correspondence to Marina Marie-Claire Höhne.


Cite this article

Höhne, MC. Nachvollziehbare Künstliche Intelligenz: Methoden, Chancen und Risiken. Datenschutz Datensich 45, 453–456 (2021). https://doi.org/10.1007/s11623-021-1470-x
