Abstract
Research on eXplainable AI (XAI) continually proposes novel approaches for explaining image classification models, including both model-dependent and model-independent strategies. However, because these approaches differ radically, it is unclear how to choose the best explanation method for a given image. In this paper, we propose a Case-Based Reasoning (CBR) solution to the problem of selecting the best alternative for explaining an image classifier. The case base captures human perceptions of the quality of the explanations generated with different image explanation methods. This experience is then reused to select the best explanation approach for a given image.
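The retrieve-and-reuse step described above can be sketched as a nearest-neighbor lookup over a case base that maps image features to the explanation method humans rated best. The feature vectors, method names, and distance metric below are illustrative assumptions, not the paper's actual representation:

```python
import numpy as np

# Hypothetical case base: each case pairs an image feature vector with the
# explanation method that human evaluators rated highest for that image.
CASE_BASE = [
    (np.array([0.9, 0.1, 0.3]), "LIME"),
    (np.array([0.2, 0.8, 0.5]), "Grad-CAM"),
    (np.array([0.4, 0.4, 0.9]), "SHAP"),
    (np.array([0.7, 0.6, 0.1]), "Anchors"),
]

def select_explainer(query_features: np.ndarray) -> str:
    """Retrieve the most similar stored case (Euclidean distance here,
    purely for illustration) and reuse its preferred explanation method."""
    distances = [np.linalg.norm(query_features - feats) for feats, _ in CASE_BASE]
    best = int(np.argmin(distances))
    return CASE_BASE[best][1]

# A query close to the first stored case reuses that case's method.
print(select_explainer(np.array([0.85, 0.15, 0.25])))  # -> LIME
```

In a full system the features would come from the image itself (or the classifier's internal representation) and the similarity measure would be tuned to the human quality ratings collected for the case base.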
Supported by the Horizon 2020 Future and Emerging Technologies (FET) programme of the European Union through the ERA-NET (CHIST-ERA-19-XAI-008 - PCI2020-120720-2) and the Spanish Committee of Economy and Competitiveness (TIN2017-87330-R).
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Recio-García, J.A., Parejas-Llanovarced, H., Orozco-del-Castillo, M.G., Brito-Borges, E.E. (2021). A Case-Based Approach for the Selection of Explanation Algorithms in Image Classification. In: Sánchez-Ruiz, A.A., Floyd, M.W. (eds) Case-Based Reasoning Research and Development. ICCBR 2021. Lecture Notes in Computer Science(), vol 12877. Springer, Cham. https://doi.org/10.1007/978-3-030-86957-1_13
Print ISBN: 978-3-030-86956-4
Online ISBN: 978-3-030-86957-1