
Faithful Post-hoc Explanation of Recommendation Using Optimally Selected Features

Chapter in: Engineering Artificially Intelligent Systems

Abstract

Recommendation systems have improved recommendation accuracy through complex algorithms; however, users often cannot tell why items are recommended and may grow uneasy as a result. Explaining why items are recommended is therefore crucial for providing transparency and improving user satisfaction. Recent studies treat the recommendation model as a black box and adopt local interpretable model-agnostic explanations (LIME) as the interpretation model; this is called a post-hoc approach. In this chapter, we propose a new LIME-based method to improve model fidelity, i.e., the recall of the interpretation model with respect to the recommendation model. Our idea is to select an optimal number of explainable features for the interpretation model rather than using all features, because the interpretation model becomes harder to learn as the number of features increases. In addition, we propose a method to generate user-friendly explanations based on the features extracted by LIME. To the best of our knowledge, this study is the first to evaluate a post-hoc explanation method through subjective experiments with users. The experimental evaluation shows that our method outperforms the state-of-the-art method LIME-RS, achieving 2.5%–2.7% higher model fidelity on the top-50 recommended items. Furthermore, subjective evaluations of the generated explanations, conducted with 50 users, demonstrate that the proposed method is statistically superior to the baselines in terms of transparency, trust, and satisfaction.
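The model-fidelity metric named in the abstract can be sketched as follows. This is an illustrative reading of the metric, not the authors' implementation: fidelity at rank k is the fraction of the black-box recommender's top-k items that the interpretable surrogate also ranks in its own top-k (a top-k recall). The function and item IDs below are hypothetical.

```python
def model_fidelity(rec_scores, surrogate_scores, k=50):
    """Top-k recall of the surrogate against the black-box recommender:
    the fraction of the recommender's top-k items that also appear in
    the surrogate's top-k. Both arguments map item IDs to scores."""
    def top_k(scores):
        ranked = sorted(scores, key=scores.get, reverse=True)
        return set(ranked[:k])
    return len(top_k(rec_scores) & top_k(surrogate_scores)) / k

# Toy example: the two models agree on one of their top-2 items ("b").
rec = {"a": 0.9, "b": 0.8, "c": 0.1, "d": 0.05}
sur = {"c": 0.9, "b": 0.8, "a": 0.1, "d": 0.05}
print(model_fidelity(rec, sur, k=2))  # 0.5
```

Under this reading, the chapter's feature-selection step would amount to choosing the number of surrogate features that maximizes this fidelity, rather than always fitting the surrogate on the full feature set.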


Notes

  1. marcotcr/lime, GitHub, https://github.com/marcotcr/lime

  2. MovieLens 1M Dataset, GroupLens, https://grouplens.org/datasets/movielens/1m/


Author information

Corresponding author: Shun Morisawa


Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Morisawa, S., Yamana, H. (2021). Faithful Post-hoc Explanation of Recommendation Using Optimally Selected Features. In: Lawless, W.F., Llinas, J., Sofge, D.A., Mittu, R. (eds) Engineering Artificially Intelligent Systems. Lecture Notes in Computer Science, vol 13000. Springer, Cham. https://doi.org/10.1007/978-3-030-89385-9_10

  • DOI: https://doi.org/10.1007/978-3-030-89385-9_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-89384-2

  • Online ISBN: 978-3-030-89385-9

  • eBook Packages: Computer Science (R0)
