ABSTRACT
The literature on explainable recommendation is already rich. In this paper, we shed light on an aspect that remains under-explored in this area of research, namely providing personalized explanations. To address this gap, we developed a transparent Recommendation and Interest Modeling Application (RIMA) that provides on-demand personalized explanations with varying levels of detail to meet the needs of different types of end-users. The results of a preliminary qualitative user study suggest potential benefits in terms of user satisfaction with the explainable recommender system. Our work contributes to the literature on explainable recommendation by exploring the potential of on-demand personalized explanations, and to practice by offering suggestions for the design and appropriate use of personalized explanation interfaces in recommender systems.
Index Terms
- On-demand Personalized Explanation for Transparent Recommendation