Balancing user satisfaction and cognitive load in coverage-optimised retrieval

https://doi.org/10.1016/j.knosys.2004.03.006

Abstract

Coverage-optimised retrieval (CORE) is a new case-based reasoning (CBR) approach to product recommendation which ensures that for any case that is acceptable to the user, one of the recommended cases is at least as good in an objective sense and so also likely to be acceptable. Similarity to the user's query, the standard criterion on which recommendations are based in CBR, is only one of several preference criteria according to which a given case may be considered at least as good as another in CORE. We present a detailed analysis of retrieval in CORE and the trade-off between user satisfaction and cognitive load in the approach.

Introduction

An advantage of case-based reasoning (CBR) as an approach to product recommendation is that if none of the available products exactly matches the user's query, she can be shown the cases that are most similar to her query [1]. A basic premise in the approach is that one of the recommended cases may be acceptable to the user even though it fails to satisfy one or more of her requirements. However, several authors have questioned the assumption that the most similar product is the one that is most acceptable to the user [2], [3], [4]. For example, a case that satisfies only two of the user's requirements may be more similar than one that satisfies three of her requirements; or the most similar case may fail to satisfy a requirement that the user considers to be essential. The k-NN strategy of retrieving the k most similar cases, rather than a single case, only partially compensates for this limitation, as the number of cases that can be presented to the user is necessarily restricted in practice [4], [5].
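The limitation can be made concrete with a small sketch. The attributes, weights, and cases below are invented for illustration (they are not taken from the paper): under a weighted similarity measure, a case satisfying only two of the user's requirements can outrank one satisfying three.

```python
def similarity(case, query, weights):
    # Weighted fraction of the query's requirements that the case satisfies.
    total = sum(weights.values())
    score = sum(w for attr, w in weights.items()
                if case.get(attr) == query.get(attr))
    return score / total

def knn(cases, query, weights, k=1):
    # Standard k-NN retrieval: the k cases most similar to the query.
    return sorted(cases, key=lambda c: similarity(c, query, weights),
                  reverse=True)[:k]

query = {"type": "laptop", "price": "low", "brand": "X", "screen": "15in"}
weights = {"type": 3.0, "price": 3.0, "brand": 0.5, "screen": 0.5}

case_a = {"type": "laptop", "price": "low", "brand": "Y", "screen": "13in"}   # 2 matches
case_b = {"type": "laptop", "price": "high", "brand": "X", "screen": "15in"}  # 3 matches
```

Here `knn([case_a, case_b], query, weights, k=1)` returns `case_a`, which satisfies only two requirements, ahead of `case_b`, which satisfies three, purely because of the attribute weights.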

Thus, the existence of a case in the case library that would be acceptable to the user does not guarantee that it will be retrieved. While retrieving every case in the case library is seldom feasible in practice, we argue that the next best thing is to ensure that for any case that is acceptable to the user, the retrieval set contains a case that is at least as good in an objective sense and so also likely to be acceptable. This is the aim of a new approach to retrieval in recommender systems which we refer to as coverage-optimised retrieval (CORE) [6].
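Full coverage in this sense can be stated as a simple predicate. The sketch below treats cases as opaque objects and assumes `at_least_as_good(r, c)` encodes whatever preference criterion has been chosen:

```python
def covers(retrieval_set, case_library, at_least_as_good):
    """True if every case in the library is matched or bettered by some
    recommended case under the given preference criterion."""
    return all(any(at_least_as_good(r, c) for r in retrieval_set)
               for c in case_library)

# Toy check with numeric "cases" and >= as the preference criterion:
# the single case 9 covers the whole library, while 7 does not.
library = [3, 7, 9, 2]
```

With this toy criterion, `covers([9], library, lambda r, c: r >= c)` holds but `covers([7], ...)` does not, since 9 is left uncovered.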

The CORE retrieval set provides full coverage of the case library in the sense that for any case that is acceptable to the user, one of the recommended cases is at least as good according to a given preference criterion. Similarity to the target query is the weakest of several preference criteria according to which a given case may be considered at least as good as another in CORE. A basic premise in the approach is that the stronger the preference criterion on which a recommendation is based, the more likely it is to be acceptable to the user. However, the size of the retrieval sets that a retrieval algorithm produces is an important factor in its ability to address the trade-off between user satisfaction and cognitive load to which Branting [5] refers. The strength of the preference criterion on which retrieval is based in CORE must, therefore, be balanced against the size of the retrieval set required to provide full coverage of the case library. Here we present a detailed analysis of retrieval in CORE that aims to increase our understanding of the trade-off between user satisfaction and cognitive load in the approach.
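As an illustration of a preference criterion stronger than raw similarity (our own example; the paper's four criteria are defined in Section 2), consider requirement dominance: one case is at least as good as another if it satisfies every query requirement the other satisfies.

```python
def satisfies(case, query, attr):
    # A case meets a requirement when its value equals the queried value.
    return case.get(attr) == query.get(attr)

def at_least_as_good(r, c, query):
    # Hypothetical criterion for illustration only: r is at least as good
    # as c if r satisfies every query requirement that c satisfies, so any
    # user for whom c is acceptable on these grounds should accept r too.
    return all(satisfies(r, query, a) for a in query if satisfies(c, query, a))

query = {"type": "laptop", "price": "low", "screen": "15in"}
r = {"type": "laptop", "price": "low", "screen": "13in"}   # meets type, price
c = {"type": "laptop", "price": "high", "screen": "13in"}  # meets type only
```

Here `at_least_as_good(r, c, query)` holds but the converse does not, since `c` fails the price requirement that `r` satisfies.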

In Section 2, we examine four different preference criteria on which retrieval may be based in CORE. We also show that k-NN provides only limited coverage of the case library with respect to preference criteria that may be more predictive of user satisfaction than similarity to the target query. In Section 3, we describe how the retrieval set is constructed in CORE and examine factors that affect the trade-off between user satisfaction and cognitive load. Related work is discussed in Section 4 and our conclusions are presented in Section 5.

Section snippets

The CORE preference criteria

In this section, we examine four preference criteria that can be used to guide the retrieval process in CORE. We also present an empirical evaluation of the coverage provided by k-NN with respect to these preference criteria.

The CORE retrieval set

The aim in CORE is to construct a retrieval set of the smallest possible size that provides full coverage of the case library with respect to a given preference criterion. In this section, we describe how such a retrieval set is constructed and examine factors that influence the size of the retrieval set required to provide full coverage of the case library.
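One way to approximate such a minimal covering set is a greedy set-cover heuristic. This is a sketch under the assumption that the preference relation is reflexive (every case covers at least itself, guaranteeing termination); the exact construction used in CORE may differ.

```python
def build_retrieval_set(library, at_least_as_good):
    # Greedily add the case that covers the most still-uncovered cases
    # until every case in the library is covered.
    uncovered = set(range(len(library)))
    chosen = []
    while uncovered:
        best = max(range(len(library)),
                   key=lambda i: sum(at_least_as_good(library[i], library[j])
                                     for j in uncovered))
        chosen.append(library[best])
        uncovered = {j for j in uncovered
                     if not at_least_as_good(library[best], library[j])}
    return chosen

# Toy run with numeric cases and >= as the criterion: the maximum
# element alone provides full coverage, so the retrieval set has size one.
```

For example, `build_retrieval_set([3, 7, 9, 2], lambda r, c: r >= c)` yields `[9]`. Stronger (sparser) preference relations cover fewer cases per recommendation, so they force larger retrieval sets, which is the cognitive-load side of the trade-off.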

Related work

Recent research on CBR approaches to product recommendation has highlighted a number of problems associated with similarity-based retrieval. As mentioned in Section 1, one problem is that the most similar case may not be the one that is most acceptable to the user [2], [3], [4]. A related issue is that the most similar cases also tend to be very similar to each other, with the result that the user may be offered a very limited choice [4], [11]. More specifically, the retrieved cases may not be …

Conclusions

CORE is a generalisation of similarity-based retrieval in recommender systems which ensures that the retrieval set for a target query provides full coverage of the case library with respect to a given preference criterion. That is, for any case that is acceptable to the user, the retrieval set is guaranteed to contain a case that is at least as good according to the given preference criterion, and so also likely to be acceptable. As might be expected, there is a trade-off between the strength …

References (16)

  • W. Wilke et al., Intelligent sales support with CBR
  • H.-D. Burkhard, Extending some concepts of CBR—foundations of case retrieval nets
  • D. McSherry, Similarity and compromise (2003)
  • B. Smyth et al., Similarity vs. diversity (2001)
  • L.K. Branting, Acquiring customer preferences from return-set selections (2001)
  • D. McSherry, Coverage-optimized retrieval (2003)
  • R. Bergmann et al., Developing Industrial Case-Based Reasoning Applications: The INRECA Methodology (1999)
  • A. Stahl, Defining similarity measures: top-down vs. bottom-up (2002)

Cited by (8)

  • Reutilization of diagnostic cases by adaptation of knowledge models

    2013, Engineering Applications of Artificial Intelligence
    Citation excerpt:

    In our work, we take into account this synergy and we combine the similarity measure with other criteria to retrieve the most adaptable case. According to Lopez de Mantaras et al. (2005), six types of retrieval related to the adaptation are identified: Diversity-Conscious Retrieval (Smyth and McClave, 2001; McSherry, 2002; McGinty and Smyth, 2003), Compromise-Driven Retrieval (McSherry, 2003, 2004), Order-Based Retrieval (Althoff and Bartsch-Spörl, 1996; Bridge and Ferguson, 2002), Explanation-Oriented Retrieval (Cunningham et al., 2003; Doyle et al., 2004), Optimization-Based Retrieval (Mougouie and Bergmann, 2002; Tartakovski et al., 2004) and Adaptation-Guided Retrieval (AGR). This present work is based on AGR.

  • A user-oriented collaborative filtering algorithm for recommender systems

    2018, PDGC 2018 - 2018 5th International Conference on Parallel, Distributed and Grid Computing
  • Why Did Naethan Pick Android over Apple? Exploiting Trade-offs in Learning User Preferences

    2018, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)