ABSTRACT
Recommender systems are intended to assist consumers in choosing from a large set of items. While most recommender research focuses on improving the accuracy of recommendation algorithms, this paper stresses the role of explanations of recommended items in gaining users' acceptance and trust. Specifically, we present a method that provides detailed explanations of recommendations while exhibiting reasonable prediction accuracy. The method models users' ratings as a function of their utility part-worths for those item attributes that influence their evaluation behavior, with the part-worths estimated through a set of auxiliary regressions and constrained optimization of their results. We provide evidence that under certain conditions the proposed method is superior to established recommender approaches, not only in its ability to provide detailed explanations but also in prediction accuracy. We further show that a hybrid recommendation algorithm can rely on the content-based component for the majority of users, switching to collaborative recommendation for only about one third of the user base.
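The core idea — explaining a predicted rating through per-attribute utility part-worths — can be illustrated with a minimal sketch. This is not the paper's actual estimation procedure (which uses auxiliary regressions); it is a hypothetical stand-in that fits one user's part-worths by least squares with a non-negativity constraint enforced via projected gradient descent. The attribute encoding (binary genre indicators) and all names are assumptions for illustration.

```python
# Minimal sketch: per-user part-worth estimation, assuming binary item
# attributes (e.g. movie genres). A projected-gradient least-squares fit
# stands in for the paper's constrained optimization.

def estimate_partworths(attrs, ratings, steps=5000, lr=0.01):
    """Fit rating ~ sum(partworth * attribute) by gradient descent,
    projecting part-worths onto the non-negative orthant each step."""
    n_attr = len(attrs[0])
    w = [0.0] * n_attr
    n = len(ratings)
    for _ in range(steps):
        grad = [0.0] * n_attr
        for x, r in zip(attrs, ratings):
            err = sum(wj * xj for wj, xj in zip(w, x)) - r
            for j, xj in enumerate(x):
                grad[j] += err * xj
        # Gradient step on mean squared error, then clip to w >= 0.
        w = [max(0.0, wj - lr * gj / n) for wj, gj in zip(w, grad)]
    return w

# Hypothetical example: attributes = [action, comedy, drama].
items = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]]
user_ratings = [4.0, 2.0, 5.0, 3.0]
w = estimate_partworths(items, user_ratings)
# The fitted part-worths directly yield an explanation, e.g.
# "recommended because this user values drama most".
```

The explanatory power comes for free: a predicted rating is just the sum of the part-worths of the item's attributes, so each attribute's contribution can be reported to the user.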