INFEATURE: An Interactive Feature-Based-Explanation Framework for Non-technical Users

  • Conference paper
  • In: Artificial Intelligence in HCI (HCII 2023)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14050)

Abstract

The field of explainable artificial intelligence (XAI) aims to make AI systems more understandable to humans. However, current XAI research often produces explanations that convey only a single aspect of a model's behavior, ignoring the complementary roles that local and global explanations play in decision-making. To address this issue, this study introduces an interactive interface built on feature-based explanations generated by SHAP. The interface presents these explanations interactively and in stages, bridging the gap between local explanations and an overall understanding of the model. It lets users explore datasets, models, and predictions in a self-guided discovery process, gaining insights into model behavior through interaction with visual and verbal explanations. The interface also displays the confusion matrix in an intuitive way that takes the underlying data distributions into account.
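
The interface itself is not reproduced here, but the following is a minimal sketch, under stated assumptions, of how the local (per-prediction) and global (aggregated) SHAP feature attributions that such an interface builds on could be computed. The dataset, model choice, and variable names are illustrative and are not taken from the paper.

```python
# A minimal sketch (assumes scikit-learn, the shap package, and the adult dataset
# bundled with shap; this is NOT the paper's implementation) of computing the local
# and global SHAP attributions that an interface such as INFEATURE presents.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative tabular classification task.
X, y = shap.datasets.adult()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# TreeExplainer produces one attribution per feature per prediction.
explainer = shap.TreeExplainer(model)
shap_values = np.asarray(explainer.shap_values(X_test))

# Depending on the shap version, classifier output is shaped (classes, rows, features)
# or (rows, features, classes); select the positive class either way.
if shap_values.ndim == 3:
    pos = shap_values[1] if shap_values.shape[0] == 2 else shap_values[..., 1]
else:
    pos = shap_values

# Local explanation: signed contribution of every feature to a single prediction.
print(dict(zip(X_test.columns, np.round(pos[0], 3))))

# Global explanation: mean absolute attribution per feature across the test set.
global_importance = np.abs(pos).mean(axis=0)
for name, value in sorted(zip(X_test.columns, global_importance), key=lambda t: -t[1]):
    print(f"{name}: {value:.3f}")
```

The same per-prediction values drive both views: shown row by row they give local explanations, and aggregated across the test set they give the global feature ranking that complements them.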


Notes

  1. https://nikipi-infeature-app-becfsw.streamlit.app/.


Author information

Corresponding author: Yulu Pi


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Pi, Y. (2023). INFEATURE: An Interactive Feature-Based-Explanation Framework for Non-technical Users. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2023. Lecture Notes in Computer Science, vol 14050. Springer, Cham. https://doi.org/10.1007/978-3-031-35891-3_16


  • DOI: https://doi.org/10.1007/978-3-031-35891-3_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-35890-6

  • Online ISBN: 978-3-031-35891-3

  • eBook Packages: Computer Science, Computer Science (R0)
