
Information Sciences

Volume 287, 10 December 2014, Pages 90-108

Learning from explanations in recommender systems

https://doi.org/10.1016/j.ins.2014.07.031

Abstract

Although recommender systems are extremely useful for recommending new products to users, it is important that these applications explain their recommendations so that users can consider and trust them. There is a further reason: by analyzing why the system recommends a particular item or proposes a certain rating, the user can also assess the quality of the recommendation and, if appropriate, change the predicted value. On this basis, this paper presents a new technique for improving recommendations based on a series of explanations that should be given when various already-known items are recommended. In short, the aim of our proposal is to learn a regression model from the information presented in these explanations and, where appropriate, use this model to change the recommendation for a target item. To test this approach, we experimented with the MovieLens data set. A number of lessons can be learned: firstly, it is possible to learn from a set of explanations, although this is highly user-dependent; and secondly, an automatic procedure can be used to analyze the role of the different features presented in an explanation. We consider these results interesting and believe they validate our novel approach.

Introduction

Ever since they were first conceived in the last decade of the 20th century [10], [32], [33], recommender systems (RS) have been used in many different areas. There are currently a large number of RSs on Internet sites such as Amazon, Netflix or Last.fm. These systems are designed to learn from a user’s behavior in order to discover their preferences and either help them find what they are specifically looking for or point them to what they might find useful within a vast amount of information.

Despite the popularity of such systems, there are various reasons why users might feel uneasy about using them and/or relying on their recommendations. These systems are usually seen as black boxes in which there is no other choice than to trust their recommendations [14]. Much research has focused on this problem; among the proposed solutions, we concentrate on explanation facilities, which have also been considered a critical function for recommender systems [8], [9], [20], [36]. These facilities provide users with an explanation of the rationale behind such recommendations, enabling them to better understand the workings of the system. Tintarev and Masthoff [41] list a number of reasons why recommender systems should provide such an explanation, including:

  1. Transparency (illustrates how a recommendation is computed).
  2. Trust (increases user confidence).
  3. Effectiveness (helps the user make a good decision).
  4. Scrutability (allows users to tell the system it is wrong).

An algorithm’s ability to explain its recommendation results, and the explanation itself, strongly depend on the recommender model used, especially in the case of memory-based approaches [36], which are based on similarities extracted from known data and which can serve as the basis for explaining recommendation results. In this paper, we shall assume a nearest-neighborhood-based RS in order to predict the rating for a target item. In this recommendation paradigm, there are two good alternatives for explaining recommendations [15]: (i) a bar chart histogram which explains that “the system suggests 3 stars because it has been rated by other similar users as …”; and (ii) an explanation based on the system’s previous performance, e.g. “The system correctly recommended for you 80% of the time”. This second alternative is valuable for increasing user trust but does not fulfill any of the other criteria.

Focusing on the first, more informative alternative: it partly explains how the predicted rating is computed (transparency), for instance by combining the ratings into a weighted average prediction. In this case, we can obtain several explanations for the same recommended rating, let us say 3. If we assume that there are five neighbors (who each contribute equally to the prediction), this rating can be obtained either when the suggested ratings are {1,2,3,4,5} or when they are {3,3,3,3,3}. Users might find both explanations useful, but while the user is unsure of the quality of the prediction in the first case, the explanation reinforces the recommendation in the second (which relates to trust). In the same way, we can consider a situation where the system predicts the value 4 because the neighbors rated the target item as {1,5,5,5,5}. The information given in the explanation can then be used to change the prediction by modifying the rating to 5 (which relates to scrutability).
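As a concrete illustration (ours, not the paper's), the following minimal sketch computes the equally-weighted prediction and the dispersion for the three neighbor rating sets mentioned above; the prediction alone cannot distinguish the first two cases, whereas the dispersion visible in the explanation can.

```python
# Minimal sketch (not from the paper): with equal neighbor weights the
# prediction is simply the mean of the neighbors' ratings, yet the three
# explanation histograms carry very different information about reliability.
from statistics import mean, stdev

cases = {
    "spread":    [1, 2, 3, 4, 5],   # prediction 3, but neighbors disagree widely
    "unanimous": [3, 3, 3, 3, 3],   # prediction 3, and neighbors fully agree
    "skewed":    [1, 5, 5, 5, 5],   # prediction 4.2, yet most neighbors said 5
}

for name, ratings in cases.items():
    print(f"{name:9s}  prediction={mean(ratings):.1f}  dispersion={stdev(ratings):.2f}")
```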

In our previous research [4], we found that, having observed an explanation, users often modified their proposed rating (around 35% of the time). Moreover, better predictions about the quality of the proposed ratings were obtained once a large number of explanations had been observed. Some learned process must therefore exist in the user’s mind that causes them to act on the prediction. In this paper, we describe such an action in terms of trusting, mistrusting or even changing the predicted rating. This situation underlies our research proposal, which can be formulated as the following question: Would it be possible to learn from a set of explanations? If the answer is “yes”, we must explore how the learned information can be used to improve recommendations. With the aim of learning from explanations, we propose using a machine-learning [13] algorithm that is capable of inducing general rules about the user’s actions from a set of observed instances.

In order to obtain the set of training instances, a first approach might be to gather the required information from users’ experience with the system, by analyzing, either explicitly or implicitly, their actions on the observed explanations. This task presents several problems: (a) user burden: we would need to know what action the user takes after observing the explanation for each recommended item, even when they have no interest in it; (b) dependence on the recommendation strategy: when considering a neighborhood-based RS, the set of neighbors used to obtain the predictions (and their associated explanations) may vary with the recommended items; and (c) the cold-start problem: we need enough feedback to start the learning process.

In order to solve this problem, we propose tackling it from a different perspective: let us consider a given (unobserved) target item, $I_t$, and let $N$ be the set of neighbors selected to compute the prediction/explanation pair. We will use this neighborhood to obtain a prediction for all the items rated by the user, say $R=\{I_1,\ldots,I_k\}$, together with their associated explanations. For these items, since we know both the real and the predicted ratings, we can automatically infer what the user’s action should be, for instance to trust or to increase the predicted value. Using this approach, we obtain a corpus of 〈prediction, explanation, action〉 tuples that can be used as training data to induce the particular action for the target item. We shall say that this approach is based on previous explanations.
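The following sketch illustrates this construction under our own assumptions (it is not the paper's code): `predict` and `explain` are hypothetical helpers that return the weighted prediction and the neighbor ratings used to justify it, and the action label is derived from the signed error with an arbitrary 0.5-star tolerance.

```python
# Rough sketch of the "previous explanations" idea. predict(neighbors, item)
# and explain(neighbors, item) are hypothetical helpers (assumptions, not the
# paper's API). For each item the user has already rated, the true rating is
# known, so the corrective "action" can be derived automatically.
def build_training_corpus(neighbors, rated_items, true_rating, predict, explain):
    corpus = []
    for item in rated_items:                 # R = {I_1, ..., I_k}
        pred = predict(neighbors, item)      # predicted rating for this known item
        expl = explain(neighbors, item)      # e.g. the neighbor rating histogram
        error = true_rating[item] - pred     # signed error defines the action
        if abs(error) < 0.5:                 # 0.5-star tolerance is our assumption
            action = "trust"
        elif error > 0:
            action = "increase"
        else:
            action = "decrease"
        corpus.append((pred, expl, action))  # one <prediction, explanation, action> tuple
    return corpus
```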

We would like to stress that the main focus of our paper is to determine whether the general idea of learning from explanations can work. We conclude that explanations represent a valuable source of information that might be exploited; as far as we know, this is the first attempt to explore their impact in a recommendation setting. By means of this approach, we can explore the costs and benefits of scrutability for improving recommendations, something which is considered an open problem in the field [21], [23], [42]. Although our approach focuses on a neighborhood-based RS, it could also be applied to other recommendation strategies, such as those using matrix factorization [22] or additional informational elements such as the quality of the items [40] or social-based knowledge [35].

This paper is organized as follows: the next section discusses the reasons for our approach; Section 3 presents related work on recommender systems; Section 4 describes our approach; Section 5.1 outlines empirical experimentation with MovieLens data sets; and finally Section 6 presents our concluding remarks.

Section snippets

Explaining by using predictions for already known items

As we mentioned previously, our research stems from the use of predictions on a set of observed items which serve as the basis for explanation facilities [4]. More specifically, this research considered a collaborative RS to predict how an active user, $a$, might rate a target item, $t$, denoted $\hat{r}(a,t)$. Following a user-based approach, the predictions are obtained using a weighted combination of the suggestions given by the nearest $N$ neighbors, i.e.
$$\hat{r}(a,t)=\sum_{v\in N} w(v)\, sg(v,t),$$
where $w(v)$ represents the
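The snippet is cut off at this point. Assuming that $w(v)$ is the neighbor's similarity normalized over the neighborhood and that $sg(v,t)$ is simply the rating that neighbor $v$ gave to item $t$ (our assumptions, since the definition is truncated), a minimal implementation of this prediction could look as follows.

```python
# Minimal sketch of the user-based weighted prediction above. The weights are
# assumed to be similarities normalized over the neighborhood; the suggestion
# sg(v, t) is assumed to be the raw rating v gave to item t.
def predict_rating(neighbors, similarity, suggestion, target_item):
    """r_hat(a, t) = sum_{v in N} w(v) * sg(v, t)."""
    total_sim = sum(similarity[v] for v in neighbors)
    return sum(
        (similarity[v] / total_sim) * suggestion[v][target_item]
        for v in neighbors
    )

# Example with three neighbors and a single item "t":
sims = {"v1": 0.9, "v2": 0.6, "v3": 0.5}
ratings = {"v1": {"t": 4}, "v2": {"t": 5}, "v3": {"t": 3}}
print(round(predict_rating(sims.keys(), sims, ratings, "t"), 2))  # ~4.05
```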

Related work

The first study to explore explanation in recommender systems as an important research problem was conducted by Herlocker et al. [15]. In that work, the authors conclude that explanations can persuade users to have greater confidence in the system. They conducted an experimental study to address various hypotheses relating to explanation, proposing 21 different interfaces and playing with certain elements such as a neighbor’s rating, overall previous performance, rating similarity or

Learning from explanation

In our approach, we shall attempt to learn from information gathered from a set of explanations. The explanation given, however, strongly depends on the recommendation model used. Several recommendation strategies have been proposed [33], but in this paper we shall focus on a nearest-neighborhood-based collaborative filtering algorithm which computes predictions by considering how similar users rated a target item. As mentioned previously, not only do we have an RS that computes a
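Although the snippet is truncated here, the learning step described in the abstract (a regression model over explanation features) can be sketched roughly as follows. The references cite Weka; purely for illustration we use scikit-learn, and the feature columns and sample values below are our own invention, not the paper's.

```python
# Rough stand-in for the learning step (the paper cites Weka; scikit-learn is
# used here only for illustration). Each training row holds hypothetical
# features extracted from one explanation -- the predicted rating and the
# neighbors' rating distribution -- and the target is the signed correction
# (true rating minus predicted rating) observed on already-rated items.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# columns: predicted rating, neighbors' mean, neighbors' std, fraction of 5-star votes
X_train = np.array([
    [3.0, 3.0, 1.58, 0.2],   # spread histogram  -> no clear correction
    [3.0, 3.0, 0.00, 0.0],   # unanimous 3s      -> trust the prediction
    [4.2, 4.2, 1.79, 0.8],   # mostly 5s         -> push the rating upwards
])
y_train = np.array([0.0, 0.0, 0.8])   # true minus predicted on known items

model = DecisionTreeRegressor(max_depth=2).fit(X_train, y_train)

# Apply the learned correction to a new target item's prediction/explanation.
x_target = np.array([[4.0, 4.0, 1.70, 0.75]])
corrected = 4.0 + model.predict(x_target)[0]
print(round(corrected, 2))
```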

Evaluation

The final objective is to show whether it is possible to determine the error in a recommendation by analyzing data gathered from an explanation. This will help us conclude that explanations can be considered as a valuable source of knowledge that might be exploited in the recommending processes. In order to tackle this objective, we shall attempt to answer the following research questions:

  • Q1: Is our approach a one-size-fits-all solution? When designing our explanation interface, we realized that
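This section is also cut short. As a rough illustration, and purely our own assumption about how the benefit of the learned corrections might be quantified, one could compare the mean absolute error (MAE) of the raw predictions with that of the corrected ones on held-out items.

```python
# Illustrative only (our assumption, not the paper's protocol): compare MAE of
# raw predictions against predictions adjusted by the learned corrections.
def mae(true_ratings, predictions):
    return sum(abs(t - p) for t, p in zip(true_ratings, predictions)) / len(true_ratings)

true_r    = [4, 2, 5, 3]
raw_pred  = [3.0, 3.0, 4.2, 3.0]
corrected = [3.4, 2.6, 4.9, 3.0]   # raw predictions plus the learned correction
print(mae(true_r, raw_pred), mae(true_r, corrected))
```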

Concluding remarks and future lines of work

In this paper, we have shown that explanations can be considered as a valuable source of knowledge that can be exploited by an RS. This paper offers a first insight into this topic and, more specifically, our research focuses on using neighbors’ opinions about items previously rated by the user in the explanations. We demonstrate that relevant data can be gathered from this type of explanation, that we can use machine-learning strategies to change this data into knowledge (in terms of rules)

Acknowledgments

This work was jointly supported by the Spanish Ministerio de Educación y Ciencia and Junta de Andalucía, under projects TIN2011-28538-C02-02 and Excellence Project TIC-04526, respectively, and also the AECID fellowship program.

References (44)

  • D. Cosley et al., Is seeing believing? How recommender system interfaces affect users’ opinions.
  • H. Cramer et al., The effects of transparency on trust in and acceptance of a content-based art recommender, User Model. User-Adapt. Interact. (2008).
  • G. Friedrich et al., A taxonomy for generating explanations in recommender systems, AI Mag. (2011).
  • D. Goldberg et al., Using collaborative filtering to weave an information tapestry, Commun. ACM (1992).
  • A. Gunawardana et al., A survey of accuracy evaluation metrics of recommendation tasks, J. Mach. Learn. Res. (2009).
  • M. Hall et al., The Weka data mining software: an update, SIGKDD Explor. Newsl. (2009).
  • J. Han et al., Data Mining: Concepts and Techniques (2011).
  • J.L. Herlocker, Position statement – explanations in recommender systems, in: CHI’99 Workshop, Interacting with...
  • J.L. Herlocker et al., Explaining collaborative filtering recommendations.
  • J.L. Herlocker et al., An empirical analysis of design choices in neighborhood-based collaborative filtering algorithms, Inf. Retrieval (2005).
  • J.L. Herlocker et al., Evaluating collaborative filtering recommender systems, ACM Trans. Inf. Syst. (2004).
  • B. Knijnenburg et al., Explaining the user experience of recommender systems, User Model. User-Adapt. Interact. (2012).
Cited by (16)

    • O³ERS: An explainable recommendation system with online learning, online recommendation, and online explanation

      2021, Information Sciences
      Citation Excerpt:

      Some researchers further exploit these contents to link similar users and items. They studied user-based and item-based explanations, which find a set of similar users or similar items, for the target user or recommended item and explain that the recommendation is based on such similarities [10,16,30]. The problems of user-based collaborative filtering explanations include trustworthiness and privacy concerns, because the target user may have no idea about other users or items who have ‘similar contents’.

    • ReEx: An integrated architecture for preference model representation and explanation

      2020, Expert Systems with Applications
      Citation Excerpt:

      To help the user understand the recommendations, researchers have proposed adding explanation facilities to recommender systems (Cleger, Fernández-Luna, & Huete, 2014). A number of benefits of recommendation explanations have been suggested by researchers in the literature, such as efficiency and effectiveness by enabling better and faster decisions, trust, user satisfaction, persuasiveness, transparency, and scrutability (Cleger et al., 2014; Zhang et al., 2014; Gedikli, Jannach, & Ge, 2014; Tintarev & Masthoff, 2015). Explaining the recommendations is particularly important in high investment product domains (e.g. digital cameras, laptops), where consumers try to avoid financial risk and appreciate help with making good decisions (Chen & Wang, 2017).

    • Linked open data-based explanations for transparent recommender systems

      2019, International Journal of Human Computer Studies
      Citation Excerpt:

      In that work, most of the users viewed the additional explanation as having a positive impact on the search results. Cleger et al. (2014) use neighbors’ opinions about items previously rated by the user to learn a regression model from the given explanations when items are recommended. This model is used to change the recommendation for a target item.

    • Deviation-based neighborhood model for context-aware QoS prediction of cloud and IoT services

      2017, Future Generation Computer Systems
      Citation Excerpt:

      However, matrix factorization is always uncertain, where the learned latent factors are unexplainable, thus resulting in difficulty as explain the predicting results for users. It is very important that recommender systems provide explanations for their recommendations so that users can consider and trust them [17]. By the same token, users certainly hope to be able to get believable explanations for the QoS predictions provided by a cloud service recommendation system.

¹ Visiting Academic at the Universidade do Estado do Amazonas, Brazil.
