Prototype-Guided Counterfactual Explanations via Variational Auto-encoder for Recommendation

Conference paper in: Machine Learning and Knowledge Discovery in Databases: Applied Data Science and Demo Track (ECML PKDD 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14174)

Abstract

Counterfactual reasoning has recently achieved impressive performance in explainable recommendation. However, existing counterfactual explanation methods ignore the realism of explanations and consider only their sparsity and proximity. Moreover, the huge counterfactual search space makes the search process time-consuming. In this study, we propose Prototype-Guided Counterfactual Explanations (PGCE), a novel counterfactual explainable recommendation framework that overcomes these issues. At its core, PGCE leverages a variational auto-encoder generative model to constrain feature modifications so that the generated counterfactual instances are consistent with the distribution of real data. Meanwhile, we construct a contrastive prototype for each user in a low-dimensional latent space, which guides the search towards the optimal region of candidate instances and thus speeds up the search process. For evaluation, we compare our method with several state-of-the-art model-intrinsic methods, as well as the latest counterfactual reasoning-based method, on three real-world datasets. Extensive experiments show that our model not only efficiently generates realistic counterfactual explanations but also achieves state-of-the-art performance on other popular explainability evaluation metrics.
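To make the framework concrete, the sketch below illustrates what prototype-guided counterfactual search in a VAE latent space can look like, following the ideas summarised in the abstract. This is a minimal, hypothetical sketch, not the authors' implementation: the SimpleVAE class, the rec_score callable, the proto prototype vector, and all loss weights are illustrative assumptions.

```python
# Hypothetical sketch of prototype-guided counterfactual search in a VAE
# latent space. Names (SimpleVAE, rec_score, proto) and all weights are
# illustrative assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn


class SimpleVAE(nn.Module):
    """Toy VAE over d-dimensional user feature vectors."""

    def __init__(self, d: int, z_dim: int = 16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d, 64), nn.ReLU())
        self.mu = nn.Linear(64, z_dim)
        self.logvar = nn.Linear(64, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, d))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def decode(self, z):
        return self.dec(z)


def counterfactual_search(vae, rec_score, x, proto, steps=200, lr=0.05,
                          lam_prox=0.1, lam_proto=0.5):
    """Optimise a latent code so the decoded instance flips the recommendation.

    rec_score: differentiable callable mapping a feature vector to the
               recommendation score we want to push below zero.
    proto:     latent prototype of the contrastive (non-recommended) class,
               e.g. the mean latent code of negative training instances.
    """
    with torch.no_grad():
        z0, _ = vae.encode(x)  # start from the factual instance's latent code
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_cf = vae.decode(z)                             # stay near the data manifold
        loss = (torch.relu(rec_score(x_cf))              # hinge: flip the prediction
                + lam_prox * (x_cf - x).abs().sum()      # sparse, proximal edit
                + lam_proto * (z - proto).pow(2).sum())  # pull toward the prototype
        opt.zero_grad()
        loss.backward()
        opt.step()
    return vae.decode(z).detach()
```

In this reading, x would play the role of a user's feature vector (e.g. aspect scores) and rec_score the differentiable score the recommender assigns to the item being explained; the prototype term steers the latent code toward the contrastive class, while decoding through the trained VAE keeps candidate counterfactuals consistent with the distribution of real data.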


Notes

  1. https://github.com/evison/Sentires/.

  2. https://nijianmo.github.io/amazon/.


Acknowledgements

This work is supported by the Project of Construction and Support for High-level Teaching Teams of Beijing Municipal Institutions.

Author information

Corresponding author

Correspondence to Ming He.


Ethics declarations

Ethical Statement

First, all experimental data come from the publicly released, de-identified Amazon Review Data, so this work involves no collection, processing, or inference of private personal information. Second, our research has no potential police or military applications. Third, this paper does not contain any studies with animals performed by any of the authors. Finally, informed consent was obtained from all individual participants included in the study, and all procedures involving human participants were performed in accordance with the ethical standards of the institutional and/or national research committee.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

He, M., Wang, J., An, B., Wen, H. (2023). Prototype-Guided Counterfactual Explanations via Variational Auto-encoder for Recommendation. In: De Francisci Morales, G., Perlich, C., Ruchansky, N., Kourtellis, N., Baralis, E., Bonchi, F. (eds) Machine Learning and Knowledge Discovery in Databases: Applied Data Science and Demo Track. ECML PKDD 2023. Lecture Notes in Computer Science, vol. 14174. Springer, Cham. https://doi.org/10.1007/978-3-031-43427-3_39

  • DOI: https://doi.org/10.1007/978-3-031-43427-3_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43426-6

  • Online ISBN: 978-3-031-43427-3

  • eBook Packages: Computer Science (R0)
