Knowledge-Based Systems

Volume 20, Issue 6, August 2007, Pages 542-556

Trust-inspiring explanation interfaces for recommender systems

https://doi.org/10.1016/j.knosys.2007.04.004

Abstract

A recommender system’s ability to establish trust with users and convince them of its recommendations, such as which camera or PC to purchase, is a crucial design factor, especially in e-commerce environments. This observation led us to build a trust model for recommender agents that focuses on an agent’s trustworthiness as derived from the user’s perception of its competence, and especially from its ability to explain the recommended results. In this article we present new results of our work in developing design principles and algorithms for constructing explanation interfaces. We show the effectiveness of these principles via a significant-scale user study in which we compared an interface developed on these principles with a traditional one. The new interface, called the organization interface, where results are grouped according to their tradeoff properties, proves significantly more effective in building user trust than the traditional approach. Users perceive it as more capable and efficient in assisting them to make decisions, and they are more likely to return to it. We therefore recommend that designers build trust-inspiring interfaces, given their high likelihood of increasing users’ intention to save cognitive effort and to return to the recommender system.

Introduction

The importance of explanation interfaces in providing system transparency, and thus increasing user acceptance, has been well recognized in a number of fields: expert systems [11], medical decision support systems [2], intelligent tutoring systems [29], and data exploration systems [4]. Being able to explain results effectively is also essential for product recommender systems. When users face the difficulty of choosing the right product to purchase, the ability to convince them to buy a proposed item is an important goal of any recommender system in an e-commerce environment. Several researchers have started exploring the potential benefits of explanation interfaces in a number of directions.

Case-based reasoning recommender systems that can explain their recommendations include ExpertClerk [27], dynamic critiquing systems [12], and FirstCase and TopCase [16], [17]. ExpertClerk explains the selling point of each sample in terms of its difference from two other contrasting samples. In a similar way, FirstCase can explain why one case is more highly recommended than another by highlighting the benefits it offers and the compromises it involves with respect to the user’s preferences. In TopCase, the relevance of any question the user is asked can be explained in terms of its ability to discriminate between competing cases. McCarthy et al. [12] propose to educate users about product knowledge by explaining what products do exist instead of justifying why the system failed to produce a satisfactory outcome. This is similar to the goal of resolving users’ preference conflicts by providing them with partially satisfied solutions [25]. Some consumer decision support systems with explanation interfaces can be found on commercial websites such as Logical Decisions (www.logicaldecisions.com), Active Decisions (www.activedecisions.com), and SmartSort (shopping.yahoo.com/smartsort).

A number of researchers also reported results from evaluating explanation interfaces with real users. Herlocker et al. [10] addressed explanation interfaces for recommender systems using ACF (automated collaborative filtering) techniques, and demonstrated that a histogram with grouping of neighbor ratings was the most compelling explanation component among the studied users. They maintain that providing explanations can improve the acceptance of ACF systems and potentially improve users’ filtering performance. Sinha and Swearingen [28] found that users like and feel more confident about recommendations that they perceive as transparent.

So far, previous work on explanation interfaces has not explored their potential for building users’ trust in recommender agents. Trust is seen as a long-term relationship between a user and the organization that the recommender system represents. Trust issues are therefore critical to study, especially for recommender systems used in e-commerce, where the traditional salesperson, and the subsequent relationship, is replaced by a product recommender agent. Studies show that customer trust is positively associated with customers’ intention to transact, purchase a product, and return to the website [8]. These results have mainly been derived from online shops’ ability to ensure security, privacy, and reputation, i.e., the integrity and benevolence aspects of trust constructs, and less from a system’s competence, such as a recommender system’s ability to explain its results. These open issues led us to develop a trust model for building user trust in recommender agents, focusing especially on the role of the competence construct. We pursue this research in four main areas: (1) we investigate the inherent benefits of using explanation for trust building in recommender systems; (2) we examine whether competence-inspired trust provides the same trust-related benefits as other trust constructs, such as benevolence and integrity; (3) we seek promising areas in which to investigate interface design issues for building user trust; and (4) we develop sound principles and algorithms for building such interfaces. In the first stage of this work, we developed a trust model for recommender systems1 and evaluated its validity through a carefully constructed user survey [5]. We established that competence perception contributes essentially to trust building and provides trust-induced benefits such as the intention to return.
As the second part of this work, it is therefore essential to concentrate on those design aspects of an interface that help the system increase its perceived competence. The work reported in this article emphasizes design principles and algorithms for generating competence-inspiring interfaces and testing these principles in empirical studies.

This article is organized as follows. Section 2 summarizes our previous work in developing a trust model for recommender systems and some results from a qualitative survey, which identified explanation interfaces as one of the most promising areas for building user trust. Section 3 describes a set of general principles derived from an in-depth examination of various design dimensions for constructing explanation interfaces, followed by an algorithm that we developed to optimize these principles. Section 4 presents a research model that explains how we developed the hypotheses on the main benefits of explanation interfaces, and discusses the design and implementation of a significant-scale empirical study to validate these hypotheses. Section 5 reports results from that study, indicating that the organization-based explanation, where recommendations are organized into different categories according to their tradeoff properties relative to the top candidate, is more likely to inspire users’ trust: users perceive it as more capable and efficient in helping them interpret and process decision information (i.e., effort saving), and are more likely to return to it. Section 6 discusses the implications of this work for related work in this area, followed by the conclusion and future work.

The present article provides a number of follow-up results and more analytical detail than our earlier paper [24]. To better explain how the organization interface algorithm works in action, we use a step-by-step data flow diagram in Section 3.2 (organization algorithm) to illustrate the generation of such interfaces (see Fig. 1). Section 4.1 explains how we establish the hypotheses, and their inter-relationships, to be tested in the empirical study. More discussion is given of the design of the user tasks and their rationale (Section 4.3). Section 5.2 is added to include new results from path coefficient analyses that show the important causal relationships among trust constructs. Several important conclusions regarding user trust and its benefits, such as users’ intention to save cognitive effort, are derived. To offer some explanation of why users prefer the organization-based interface, we analyzed and have included users’ actual comments in Section 5.3. Finally, we include a more detailed discussion of future work in Section 7 (Conclusion), particularly addressing long-term trust issues and how trust relates to other issues such as user control and privacy.

Section snippets

Trust model and explanation interfaces

This section summarizes our earlier work and results on constructing a trust model for recommender systems [5]. It is intended to offer an overview of the overall research agenda and a roadmap identifying the most promising areas for investigating design issues for trust-inspiring interfaces.

Organization-based explanation interfaces

Traditional product search and recommender systems present a set of top-k alternatives to users. We call this style of display the k-best interface. Because these alternatives are calculated based on users’ revealed preferences (directly or indirectly), these top-k items may not provide for diversity. Recently the need to include more diversified items in the result list has been recognized. Methods have been developed to address users’ potentially unstated preferences [7], [22], cover topic …
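The organization-style grouping sketched in this snippet can be illustrated roughly as follows. This is a minimal sketch, not the paper’s exact algorithm: the product data, the attribute list, and the `tradeoff_category` helper are all hypothetical, and it simply buckets alternatives by which attributes they improve or compromise relative to the top candidate (e.g. “cheaper but heavier”).

```python
# Minimal sketch of organizing alternatives by their tradeoffs relative
# to the top candidate. All product data below are made up for illustration.
TOP = {"name": "Laptop A", "price": 1500, "weight": 2.0}
ALTERNATIVES = [
    {"name": "Laptop B", "price": 1200, "weight": 2.8},
    {"name": "Laptop C", "price": 1100, "weight": 3.0},
    {"name": "Laptop D", "price": 1700, "weight": 1.4},
]

# In this toy domain, lower is better for both attributes.
ATTRIBUTES = ["price", "weight"]

def tradeoff_category(item, top):
    """Describe, per attribute, whether the item improves on the top candidate."""
    return tuple(
        ("better" if item[a] < top[a] else "worse" if item[a] > top[a] else "same")
        + " " + a
        for a in ATTRIBUTES
    )

# Group alternatives that share the same tradeoff pattern into one category.
groups = {}
for item in ALTERNATIVES:
    groups.setdefault(tradeoff_category(item, TOP), []).append(item["name"])

for category, names in groups.items():
    print(", ".join(category), "->", names)
```

Each resulting group can then be shown with a category title summarizing its tradeoff (here, Laptops B and C fall into one “better price, worse weight” group, and Laptop D into a “worse price, better weight” group), which is the kind of display the organization interface uses in place of a flat k-best list.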

User evaluation

In order to understand whether the organization interface based on the design principles and algorithm is a more effective way to explain recommendations, we conducted a significant-scale empirical study that compared our organization interface with the traditional “why” interface in a within-subjects design. The main objective was to measure the difference in users’ trust level in terms of the perceived competence and trusting intentions (the intention to save effort and to return) in the two …

Results analysis

Results were analyzed for each measured variable using the paired samples t-test.
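As an illustration of this analysis, a paired-samples t-test compares the same participants’ ratings under the two interfaces. The Likert ratings below are invented for the example (they are not the study’s data); the computation itself is the standard paired t statistic.

```python
import math
from statistics import mean, stdev

# Hypothetical 1-5 Likert ratings of perceived competence from the same
# ten participants under each interface (illustrative numbers only).
organization = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]
why_baseline = [3, 4, 3, 3, 4, 3, 4, 4, 2, 3]

# Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
# where d is the per-participant difference and df = n - 1.
diffs = [o - w for o, w in zip(organization, why_baseline)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))
print(f"n = {n}, mean difference = {mean(diffs):.2f}, t = {t_stat:.2f}")
```

For these illustrative numbers the statistic works out to t = 6.0 with 9 degrees of freedom; in practice one would read the p-value off a t distribution (or use a statistics package) to judge significance.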

Implication to related work

Results from our empirical study strongly support a current trend in displaying a diverse set of recommendations rather than the k-best matching ones. McGinty and Smyth [14] maintain that showing diverse items can reduce the recommendation cycles. McSherry [16] advocates that the displayed items should cover all possible tradeoffs that the user may be prepared to accept. Faltings et al. [7] propose to show products that can be potentially acceptable to users had they stated all of their …

Conclusion and future work

We have developed a trust model for recommender agents, and we have shown that explanation interfaces have great potential for building competence-inspired trust relationships with users. A carefully designed survey indicates that a recommender agent’s competence is positively correlated with users’ intention to return, but not necessarily with their intention to purchase. It also shows that an organization-based explanation interface is likely to be more effective than the simple “why” …

References (30)

  • S. Grabner-Kräuter et al., Empirical research in on-line trust: a review and critical assessment, International Journal of Human-Computer Studies (2003)
  • D.A. Klein et al., A framework for explaining decision-theoretic advice, Artificial Intelligence (1994)
  • R. Agrawal, T. Imielinski, A. Swami, Mining association rules between sets of items in large databases, in: ...
  • E. Armengol et al., Individual prognosis of diabetes long-term risks: a CBR approach, Methods of Information in Medicine (2001)
  • R. Burke et al., The FindMe approach to assisted browsing, Journal of IEEE Expert (1997)
  • G. Carenini, J. Moore, Multimedia explanations in IDEA decision support system, Working Notes of the AAAI Spring ...
  • L. Chen, P. Pu, Trust building in recommender agents, Workshop on Web Personalization, Recommender Systems and ...
  • R.F. Falk et al., A Primer for Soft Modeling (1992)
  • B. Faltings, P. Pu, M. Torrens, P. Viappiani, Designing example-critiquing interaction, in: International Conference on ...
  • J.F. Hair et al., Multivariate Data Analysis with Readings (1995)
  • J.L. Herlocker, J.A. Konstan, J. Riedl, Explaining collaborative filtering recommendations, in: ACM Conference on ...
  • K. McCarthy, J. Reilly, L. McGinty, B. Smyth, Thinking positively – explanatory feedback for conversational recommender ...
  • K. McCarthy, J. Reilly, L. McGinty, B. Smyth, Experiments in dynamic critiquing, in: International Conference on ...
  • L. McGinty, B. Smyth, On the role of diversity in conversational recommender systems, in: Fifth International ...
  • D.H. McKnight, N.L. Chervany, What Trust Means in e-commerce Customer Relationships: Conceptual Typology, International ...