Abstract
The field of explainable artificial intelligence (XAI) aims to make AI systems more understandable to humans. However, current XAI research often produces explanations that convey only one facet of a model's behavior, overlooking the complementary roles that local and global explanations play in decision making. To address this issue, this study introduces an interactive interface built on feature-based explanations generated with SHAP. The interface presents these explanations in an interactive, staged manner, bridging the gap between individual local explanations and an overall understanding of the model. It allows users to explore datasets, models, and predictions in a self-guided process, gaining insights into model behavior through combined visual and verbal explanations. The interface also displays the confusion matrix in an intuitive way that takes the underlying data distributions into account.
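The interface described in the abstract is not reproduced here. As a rough illustration of the quantities it builds on (per-prediction SHAP attributions as the local view, their aggregation across a dataset as the global view, and a confusion matrix normalized by the true-class distribution), a minimal sketch using the shap and scikit-learn libraries could look as follows. The dataset, model, and variable names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's implementation) of the quantities an
# interface like INFEATURE builds on: per-instance SHAP attributions (local),
# their aggregation over a dataset (global), and a confusion matrix
# normalized per true class. Dataset and model are placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

data = load_breast_cancer(as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Local explanations: one SHAP value per feature per prediction.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_test)
# Depending on the shap version, binary classifiers yield either a list with
# one array per class or a 3-D array; keep the positive-class attributions.
if isinstance(sv, list):
    sv = sv[1]
elif sv.ndim == 3:
    sv = sv[:, :, 1]

# Local view: explanation for a single prediction, strongest features first.
local = pd.Series(sv[0], index=X_test.columns)
print(local.reindex(local.abs().sort_values(ascending=False).index).head(5))

# Global view: mean absolute SHAP value per feature across the test set,
# which ranks features by overall influence and links back to the local view.
global_importance = pd.Series(np.abs(sv).mean(axis=0), index=X_test.columns)
print(global_importance.sort_values(ascending=False).head(5))

# Confusion matrix normalized per true class, so each row reflects the
# underlying class distribution rather than raw counts.
cm = confusion_matrix(y_test, model.predict(X_test), normalize="true")
print(np.round(cm, 3))
```

In such a view, the same SHAP values feed both the per-prediction (local) and dataset-level (global) displays, which is the complementarity the abstract refers to.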
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Pi, Y. (2023). INFEATURE: An Interactive Feature-Based-Explanation Framework for Non-technical Users. In: Degen, H., Ntoa, S. (eds.) Artificial Intelligence in HCI. HCII 2023. Lecture Notes in Computer Science, vol. 14050. Springer, Cham. https://doi.org/10.1007/978-3-031-35891-3_16