Abstract
Artificial Intelligence algorithms have become pervasive in multiple high-stakes domains, yet their internal logic can be obscure to humans. Explainable Artificial Intelligence aims to design tools and techniques that explain the predictions of so-called black-box algorithms. The Human-Computer Interaction community has long stressed the need for a more user-centered approach to Explainable AI, one that can benefit from research on user interfaces, user experience, and visual analytics. This paper proposes a visual-based method that combines rules with feature importance. To assess its effectiveness with users, we conducted a user study with 15 participants comparing our visual method against the original output of the algorithm and a textual representation.
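The FIPER method itself is not detailed in this excerpt; as a purely illustrative sketch of the two explanation ingredients the abstract mentions, the snippet below derives per-feature importances and a human-readable rule for a black-box classifier. It uses scikit-learn and a shallow surrogate tree as assumed stand-ins, not the authors' implementation.

```python
# Minimal, illustrative sketch only (assumed stand-in, not the FIPER pipeline):
# it produces the two ingredients a rules + feature-importance explanation combines.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Ingredient 1: feature importances from the black-box model.
importances = sorted(
    zip(X.columns, black_box.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
print("Top features:", importances[:3])

# Ingredient 2: if-then rules, read off a shallow surrogate tree
# fitted to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=list(X.columns)))
```

A visual explanation of the kind the paper proposes would then render these two outputs jointly for a single prediction, rather than printing them as text.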
Acknowledgements
This work has been supported by the European Community Horizon 2020 programme under the funding scheme ERC-2018-ADG G.A. 834756 XAI: Science and technology for the eXplanation of AI decision making, by the European Union’s Horizon Europe Programme under the CREXDATA project, grant agreement no. 101092749, by the Next Generation EU: NRRP Initiative, Mission 4, Component 2, Investment 1.3, PE0000013 - “Future Artificial Intelligence Research - FAIR” - CUP: H97G22000210007 and “SoBigData.it - Strengthening the Italian RI for Social Mining and Big Data Analytics” - Prot. IR0000013.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Cappuccio, E., Fadda, D., Lanzilotti, R., Rinzivillo, S. (2025). FIPER: A Visual-Based Explanation Combining Rules and Feature Importance. In: Meo, R., Silvestri, F. (eds) Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2023. Communications in Computer and Information Science, vol 2135. Springer, Cham. https://doi.org/10.1007/978-3-031-74633-8_11
DOI: https://doi.org/10.1007/978-3-031-74633-8_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-74632-1
Online ISBN: 978-3-031-74633-8