Abstract
Recommendation systems have improved recommendation accuracy through the use of complex algorithms; however, users struggle to understand why items are recommended and may therefore feel uneasy. It is thus crucial to explain why items are recommended, both to provide transparency and to improve user satisfaction. Recent studies have adopted local interpretable model-agnostic explanations (LIME) as an interpretation model by treating the recommendation model as a black box; this is called a post-hoc approach. In this chapter, we propose a new LIME-based method that improves model fidelity, i.e., the recall of the interpretation model with respect to the recommendation model. Our idea is to select an optimal number of explainable features for the interpretation model rather than using all features, because the interpretation model becomes harder to learn as the number of features increases. In addition, we propose a method that generates user-friendly explanations from the features extracted by LIME. To the best of our knowledge, this is the first study to evaluate post-hoc explanations through subjective experiments with users to confirm the effectiveness of the method. The experimental evaluation shows that our method outperforms the state-of-the-art method LIME-RS, achieving 2.5%–2.7% higher model fidelity on the top-50 recommended items. Furthermore, subjective evaluations of the generated explanations conducted with 50 users demonstrate that the proposed method is statistically superior to the baselines in terms of transparency, trust, and satisfaction.
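The core idea of the abstract, fitting a LIME-style local linear surrogate around one recommendation and keeping only the k features that maximize fidelity to the black-box model, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy black-box scorer, the perturbation scheme, the kernel width, and the use of weighted R^2 as a fidelity proxy are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat = 20  # illustrative number of binary user/item features

# Hypothetical black-box recommender: a nonlinear score over a
# binary feature vector (stands in for the real recommendation model).
true_w = rng.normal(size=n_feat)
def black_box(X):
    s = X @ true_w
    return s + 0.3 * np.sin(s)

x0 = rng.integers(0, 2, size=n_feat).astype(float)  # instance to explain

# 1. Perturb the instance by randomly masking its active features (LIME-style).
Z = rng.integers(0, 2, size=(500, n_feat)).astype(float) * x0
y = black_box(Z)

# 2. Weight perturbed samples by proximity to x0 (Gaussian kernel on
#    normalized Hamming distance; the width 0.25 is arbitrary).
dist = np.abs(Z - x0).mean(axis=1)
w = np.exp(-(dist ** 2) / 0.25)
sw = np.sqrt(w)

def fit_surrogate(cols):
    """Weighted least-squares linear surrogate (with intercept) on a
    feature subset; returns its predictions on the perturbed samples."""
    A = np.column_stack([Z[:, cols], np.ones(len(Z))])
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return A @ coef

# 3. Rank features by the magnitude of their full-model coefficients.
A_full = np.column_stack([Z, np.ones(len(Z))])
coef_full, *_ = np.linalg.lstsq(A_full * sw[:, None], y * sw, rcond=None)
rank = np.argsort(np.abs(coef_full[:-1]))  # exclude intercept

def surrogate_fidelity(k):
    """Weighted R^2 of a surrogate restricted to the top-k features,
    used here as a stand-in for the chapter's model-fidelity measure."""
    pred = fit_surrogate(rank[-k:])
    y_bar = np.sum(w * y) / w.sum()
    ss_res = np.sum(w * (y - pred) ** 2)
    ss_tot = np.sum(w * (y - y_bar) ** 2)
    return 1.0 - ss_res / ss_tot

# 4. Choose the number of explainable features that maximizes fidelity.
best_k = max(range(1, n_feat + 1), key=surrogate_fidelity)
print("selected number of explainable features:", best_k)
```

The sketch makes the trade-off in the abstract concrete: a surrogate over all features is not automatically the most faithful one, so scanning k and keeping the fidelity-maximizing subset is a reasonable selection criterion.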
Notes
1. marcotcr/lime, GitHub, https://github.com/marcotcr/lime
2. MovieLens 1M Dataset, GroupLens, https://grouplens.org/datasets/movielens/1m/
References
Zhang, Y., Chen, X.: Explainable recommendation: a survey and new perspectives. Found. Trends Inf. Retr. 14(1), 1–101 (2020)
Herlocker, J.L., Konstan, J.A., Riedl, J.: Explaining collaborative filtering recommendations. In: Proceedings of the 2000 ACM conference on Computer supported cooperative work, pp. 241–250 (2000)
Abdollahi, B., Nasraoui, O.: Using explainability for constrained matrix factorization. In: Proceedings of the Eleventh ACM Conference on Recommender Systems, pp. 79–83 (2017)
Chen, C., Zhang, M., Liu, Y., Ma, S.: Neural attentional rating regression with review-level explanations. In: Proceedings of the 2018 World Wide Web Conference on World Wide Web, pp. 1583–1592 (2018)
Peake, G., Wang, J.: Explanation mining: Post Hoc interpretability of latent factor models for recommendation systems. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 2060–2069 (2018)
Nóbrega, C., Marinho, L.: Towards explaining recommendations through local surrogate models. In: Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, pp. 1671–1678 (2019)
Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?” Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016)
Zhu, F., Jiang, M., Qiu, Y., Sun, C., Wang, M.: RSLIME: an efficient feature importance analysis approach for industrial recommendation systems. In: Proceedings of the 2019 International Joint Conference on Neural Networks, pp. 1–6 (2019)
Morisawa, S., Manabe, T., Zamami, T., Yamana, H.: Proposal of recommendation reason presentation method in recommendation systems. DBSJ Jpn. J. 18(3), 1–8 (2020) (in Japanese)
Harper, F.M., Konstan, J.A.: The MovieLens datasets: history and context. ACM Trans. Interactive Intell. Syst. 5(4), 1–19 (2016)
Rendle, S.: Factorization machines. In: Proceedings of the 2010 IEEE International Conference on Data Mining, pp. 995–1000 (2010)
Bayer, I.: fastFM: a library for factorization machines. J. Mach. Learn. Res. 17, 1–5 (2016)
Paszke, A., et al.: PyTorch: an imperative style, high-performance deep learning library. Adv. Neural. Inf. Process. Syst. 32, 1–12 (2019)
Chang, S., Harper, F.M., Terveen, L.G.: Crowd-Based personalized natural language explanations for recommendations. In: Proceedings of the 10th ACM Conference on Recommender Systems, pp. 175–182 (2016)
Tintarev, N., Masthoff, J.: Explaining recommendations: design and evaluation. In: Ricci, F., Rokach, L., Shapira, B. (eds.) Recommender Systems Handbook, pp. 353–382. Springer, Boston (2015). https://doi.org/10.1007/978-1-4899-7637-6_10
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this chapter
Morisawa, S., Yamana, H. (2021). Faithful Post-hoc Explanation of Recommendation Using Optimally Selected Features. In: Lawless, W.F., Llinas, J., Sofge, D.A., Mittu, R. (eds) Engineering Artificially Intelligent Systems. Lecture Notes in Computer Science(), vol 13000. Springer, Cham. https://doi.org/10.1007/978-3-030-89385-9_10
Print ISBN: 978-3-030-89384-2
Online ISBN: 978-3-030-89385-9