Abstract
AI-driven recommender systems have proven highly successful in many industries, prompting the banking sector to explore personalised client recommendations. Given the interpersonal nature of banking sales to corporate clients, where AI systems recommend products to Relationship Managers who in turn interact with clients, AI-generated recommendations must be explainable to support commercial activities. Our work leverages Generative AI and Large Language Models to synthesise natural-language explanations of an AI algorithm's motivations, tailored to non-technical users in the banking environment. Through a case study at a major bank, Intesa Sanpaolo, our approach successfully replaces manual expert labour, offering scalable, efficient, and business-relevant explanations. Our study addresses key research questions and contributes an enriched presentation of SHAP explainer outputs in banking, validated against expert standards. We also explore the business impact, providing insights into the value of transparent AI-driven recommendations in the evolving landscape of banking services.
The views and opinions expressed are those of the authors and do not necessarily reflect the views of Intesa Sanpaolo, its affiliates, or employees.
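The pipeline the abstract describes, turning SHAP feature attributions into a natural-language explanation for a Relationship Manager, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature names, SHAP values, and prompt wording are hypothetical, and the resulting prompt would be sent to an LLM rather than shown directly.

```python
# Sketch: convert precomputed SHAP attributions for one client-product pair
# into a prompt asking an LLM for a business-friendly explanation.
# All feature names and values below are illustrative assumptions.

def build_explanation_prompt(product, shap_values, top_k=3):
    """Rank features by absolute SHAP contribution and embed the top ones in a prompt."""
    ranked = sorted(shap_values.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [
        f"- {name}: {'increases' if v > 0 else 'decreases'} the score by {abs(v):.2f}"
        for name, v in ranked[:top_k]
    ]
    return (
        f"Explain to a Relationship Manager, in plain business language, why "
        f"the model recommends '{product}' to this corporate client, given "
        f"these top feature contributions:\n" + "\n".join(lines)
    )

# Hypothetical SHAP output for one recommendation
prompt = build_explanation_prompt(
    "trade finance facility",
    {"export_revenue_share": 0.41, "sector_risk": -0.12, "recent_fx_volume": 0.27},
)
print(prompt)
```

The key design choice is that the LLM never sees raw model internals: it receives only the ranked, signed contributions, which constrains the generated explanation to facts the SHAP explainer actually produced.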
Notes
- 1.
- 2. The authors were not involved in defining the business Key Performance Indicators (KPIs) used to evaluate these business values. The results were shared with the authors by the company, which owns the definition and monitoring of those KPIs.
Acknowledgments
We extend our deepest gratitude to the Digital Solutions & Analytics office for their invaluable support and for establishing the gold standards used in this research. Special thanks to Valerio Lodola, Giulia Della Pedrina, Matteo Tribastone, and the Digital Business Partners Eugenia Ceresetti and Brunella Cutrera for facilitating interaction within banking structures.
Additionally, we would like to thank the Competence Center of AI for providing on-premises Gen AI services for this work. We appreciate the invaluable assistance and expertise of Pierluigi Lacqua, Francesco Bonazzi, Stefania Piosso, and Claudia Berloco.
Lastly, thanks to Marco Ditta, Andrea Cosentini, Maddalena Amoruso, and Mauro Pinto for their constant encouragement to explore innovations within Intesa Sanpaolo.
Ethics declarations
Disclosure of Interests
The authors declare that they have no relevant or material financial interests that relate to the research described in this paper. No funding was received for this study, and no other potential conflicts of interest exist.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Castelnovo, A. et al. (2024). Augmenting XAI with LLMs: A Case Study in Banking Marketing Recommendation. In: Longo, L., Lapuschkin, S., Seifert, C. (eds) Explainable Artificial Intelligence. xAI 2024. Communications in Computer and Information Science, vol 2153. Springer, Cham. https://doi.org/10.1007/978-3-031-63787-2_11
DOI: https://doi.org/10.1007/978-3-031-63787-2_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-63786-5
Online ISBN: 978-3-031-63787-2
eBook Packages: Computer Science, Computer Science (R0)