Abstract
Explainable Artificial Intelligence (AI) has become a key topic, as it promises to close the gap between AI in research and AI in real-world applications, and thus to unlock the great potential AI holds for many fields, such as cancer research and climate science. But can we trust the explanations of AI at all? They, too, can be manipulated and thus pose risks to individuals. This article offers a rigorous analysis of Explainable AI methods, their opportunities, and their risks.
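To make concrete what an "explanation" means here: many Explainable AI methods assign a relevance score to each input feature, for example by multiplying the input with the model's gradient. The following is a minimal sketch of this idea for a toy logistic model; the weights and function names are hypothetical and chosen only for illustration, not taken from the article.

```python
import numpy as np

# Toy "model": logistic regression on a 4-feature input.
# (Hypothetical weights, for illustration only.)
w = np.array([2.0, -1.0, 0.0, 0.5])
b = 0.1

def predict(x):
    """Probability output of the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def explain(x):
    """Gradient-times-input relevance per feature, a common
    building block of saliency-style XAI methods."""
    p = predict(x)
    grad = p * (1.0 - p) * w  # derivative of predict w.r.t. x
    return x * grad

x = np.array([1.0, 1.0, 1.0, 1.0])
rel = explain(x)
print(rel)  # feature with zero weight receives zero relevance
```

Such relevance maps are exactly the objects that, as the abstract warns, can themselves be manipulated without changing the model's prediction.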
Höhne, M.C. Nachvollziehbare Künstliche Intelligenz: Methoden, Chancen und Risiken. Datenschutz und Datensicherheit 45, 453–456 (2021). https://doi.org/10.1007/s11623-021-1470-x