Abstract
Counterfactual explanation models have recently shown impressive performance in adding explanations to recommender systems. Despite their effectiveness, most of these models neglect the fact that not all aspects are equally important when users decide to purchase different items. As a result, the generated explanations may not reflect users’ actual preferences. Furthermore, these models typically rely on external tools to extract aspect-level representations, making the model’s explainability and recommendation performance highly dependent on those tools. This study addresses these research gaps by proposing a co-attention-based fine-grained counterfactual explanation model that uses co-attention and aspect representation learning to directly capture user preferences toward different items for recommendation and explanation. The superiority of the proposed model is demonstrated through extensive experiments.
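To make the aspect-level co-attention idea concrete, below is a minimal sketch of how a bilinear co-attention layer could weight user and item aspect representations and produce a rating for one user-item pair. All names (co_attention_score, the affinity matrix, the max-pooling choice, and the dimensions) are hypothetical illustrations under assumed conventions, not the authors' implementation.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention_score(user_aspects, item_aspects, affinity):
    """Score one user-item pair from aspect-level representations.

    user_aspects: (k, d) array, one row per user aspect embedding.
    item_aspects: (k, d) array, one row per item aspect embedding.
    affinity:     (d, d) learned bilinear co-attention matrix.
    Returns the predicted rating and per-aspect attention weights.
    """
    # Affinity between every user aspect and every item aspect.
    M = user_aspects @ affinity @ item_aspects.T          # (k, k)
    # Aspect importance for the user (over rows) and the item (over columns).
    user_attn = softmax(M.max(axis=1))                    # (k,)
    item_attn = softmax(M.max(axis=0))                    # (k,)
    # Attention-pooled representations and a simple dot-product rating.
    u = user_attn @ user_aspects                          # (d,)
    v = item_attn @ item_aspects                          # (d,)
    return float(u @ v), user_attn, item_attn

Given such aspect weights, a counterfactual explanation can be obtained by searching for the smallest intervention on the aspects (for example, suppressing a single highly weighted aspect) that pushes the item out of the user's top-k list; the aspect whose removal flips the recommendation then serves as the explanation.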
Acknowledgements
This research is supported by an Australian Government Research Training Program scholarship, and the collaboration is partially supported by an SRG grant from the University of Macau.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Xia, H., Li, Q., Wang, Z., Li, G. (2023). Toward Explainable Recommendation via Counterfactual Reasoning. In: Kashima, H., Ide, T., Peng, WC. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2023. Lecture Notes in Computer Science(), vol 13937. Springer, Cham. https://doi.org/10.1007/978-3-031-33380-4_1
DOI: https://doi.org/10.1007/978-3-031-33380-4_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-33379-8
Online ISBN: 978-3-031-33380-4
eBook Packages: Computer Science, Computer Science (R0)