DOI: 10.1145/1835449.1835538

A user behavior model for average precision and its generalization to graded judgments

Published: 19 July 2010

ABSTRACT

We explore a set of hypotheses on user behavior that potentially lie at the origin of the (Mean) Average Precision (AP) metric. This allows us to propose a more realistic version of AP in which users click non-deterministically on relevant documents and the number of relevant documents in the collection need not be known in advance. We then depart from the assumption that a document is either relevant or irrelevant and instead use graded relevance judgments similar to the editorial labels used for Discounted Cumulated Gain (DCG). We assume that clicked documents provide users with a certain level of "utility" and that a user ends a search once she has gathered enough utility. Using the query logs of a commercial search engine, we show how to estimate the utility associated with a label from the record of past user interactions with the search engine, and we show how the two user models can be evaluated by their ability to accurately predict future clicks. Finally, based on these user models, we propose a measure that captures the relative quality of two rankings.
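
To make the two user models concrete, the following minimal Python sketch contrasts them. The first function is classical AP over a binary relevance list, which equals the expected precision at the rank of a relevant document drawn uniformly at random (Robertson's interpretation, reference [5]). The second simulates the graded model sketched above: the user scans the ranking top-down, clicks each document with a label-dependent probability, gains a label-dependent utility per click, and stops once the accumulated utility reaches a threshold. All function names, the label set, and the numeric click probabilities, utilities, and stopping threshold are illustrative assumptions, not values estimated in the paper.

```python
import random

def average_precision(rels):
    """Classical AP over a binary relevance list (1 = relevant),
    normalized by the number of relevant documents in the list.
    Equals the expected precision at the rank of a relevant document
    drawn uniformly at random (cf. Robertson [5])."""
    hits, total = 0, 0.0
    for rank, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            total += hits / rank
    return total / hits if hits else 0.0

def simulate_session(labels, utility, p_click, u_stop, rng):
    """One session under the graded user model: scan top-down, click a
    document with probability p_click[label], gain utility[label] per
    click, and stop once accumulated utility reaches u_stop. Returns
    the fraction of scanned documents that were clicked (precision at
    the stopping rank) -- an illustrative quantity, not the paper's
    exact measure."""
    gained, clicks = 0.0, 0
    for rank, label in enumerate(labels, start=1):
        if rng.random() < p_click[label]:
            clicks += 1
            gained += utility[label]
            if gained >= u_stop:
                return clicks / rank
    # The user exhausted the ranking without being satisfied.
    return clicks / len(labels) if labels else 0.0

def expected_quality(labels, utility, p_click, u_stop, n=20000, seed=0):
    """Monte Carlo estimate of the expected quality of one ranking."""
    rng = random.Random(seed)
    return sum(simulate_session(labels, utility, p_click, u_stop, rng)
               for _ in range(n)) / n

if __name__ == "__main__":
    # Hypothetical editorial labels and parameters:
    utility = {"perfect": 1.0, "good": 0.5, "bad": 0.0}
    p_click = {"perfect": 0.9, "good": 0.6, "bad": 0.1}
    ranking_a = ["perfect", "good", "bad", "good", "bad"]
    ranking_b = ["good", "bad", "perfect", "good", "bad"]
    qa = expected_quality(ranking_a, utility, p_click, u_stop=1.0)
    qb = expected_quality(ranking_b, utility, p_click, u_stop=1.0)
    print(f"ranking A: {qa:.3f}  ranking B: {qb:.3f}")
```

Averaging the simulated stopping-rank precision over many sessions gives an expected quality per ranking, so two rankings can be compared under the same parameters, in the spirit of the relative-quality measure the abstract mentions. The paper estimates the label utilities from real query logs; this toy simulation does not attempt that step.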

References

  1. G. Dupret. User models to compare and evaluate web IR metrics. In Proceedings of the SIGIR 2009 Workshop on the Future of IR Evaluation, 2009.
  2. G. Dupret and C. Liao. Estimating intrinsic document relevance from clicks. In Proceedings of the 3rd ACM WSDM Conference, 2010.
  3. D. Kelly. Methods for Evaluating Interactive Information Retrieval Systems with Users, volume 3 of Foundations and Trends in Information Retrieval. 2009.
  4. A. Moffat and J. Zobel. Rank-biased precision for measurement of retrieval effectiveness. ACM Trans. Inf. Syst., 27(1):1--27, 2008.
  5. S. Robertson. A new interpretation of average precision. In Proceedings of SIGIR '08, pages 689--690, New York, NY, USA, 2008. ACM.
  6. E. M. Voorhees and D. Harman, editors. TREC: Experiment and Evaluation in Information Retrieval. MIT Press, 2005.
  7. K. Wang, T. Walker, and Z. Zheng. Pskip: estimating relevance ranking quality from web search clickthrough data. In Proceedings of the 15th ACM SIGKDD, pages 1355--1364, New York, NY, USA, 2009. ACM.

More complete references for this work can be found in [1].

Published in

SIGIR '10: Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval
July 2010, 944 pages
ISBN: 9781450301534
DOI: 10.1145/1835449

Copyright © 2010 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



Qualifiers

• research-article

Acceptance Rates

SIGIR '10 paper acceptance rate: 87 of 520 submissions (17%)
Overall acceptance rate: 792 of 3,983 submissions (20%)
