
Discounted Cumulative Gain and User Decision Models

  • Conference paper
String Processing and Information Retrieval (SPIRE 2011)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 7024)

Abstract

We propose to explain Discounted Cumulative Gain (DCG) as the consequence of a set of hypotheses, in a generative probabilistic model, about how users browse the ranked result list of a search engine. Reconstructing a user model from a metric in this way allows us to show that the numerical values of the discounting factors can be estimated from data. It also allows us to compare candidate user models in terms of their ability to describe the observed data, and hence to select the best one. It is generally not possible to relate the performance of a ranking function in terms of DCG to the clicks observed after the function is deployed in a production environment; we show in this paper that a user model makes this possible. Finally, we show that DCG can be interpreted as a measure of the utility a user gains per unit of effort she is ready to allocate. This contrasts nicely with a recent interpretation of average precision (AP), another popular Information Retrieval metric, as a measure of the effort needed to achieve a unit of utility [7].
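To make the object of study concrete, the following sketch computes DCG in the standard form of Järvelin and Kekäläinen [11], where rank r receives the discount 1/log2(r+1). The `discounts` parameter is our illustrative addition: the paper's argument is that these factors need not be fixed by convention but can be estimated from click data as the probability that a user examines each rank.

```python
import math

def dcg(gains, discounts=None):
    """Discounted Cumulative Gain for a ranked list of relevance gains.

    By default uses the conventional 1/log2(r+1) discount at rank r
    (1-indexed) [11]. Passing explicit `discounts` corresponds to the
    paper's view of the discount curve as a user-model quantity that
    can be fitted to observed browsing behaviour.
    """
    if discounts is None:
        # r + 2 because Python enumerates ranks from 0.
        discounts = [1.0 / math.log2(r + 2) for r in range(len(gains))]
    return sum(g * d for g, d in zip(gains, discounts))

# Graded relevance judgments for the top four results of a ranking.
ranking = [3, 2, 3, 0]
print(round(dcg(ranking), 4))  # → 5.7619
```

Under the user-model reading, each discount is the probability that the user's browsing reaches that rank, so DCG becomes an expected gain rather than an arbitrary weighting.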


References

  1. Bollmann, P., Raghavan, V.V.: A utility-theoretic analysis of expected search length. In: SIGIR 1988, pp. 245–256. ACM, New York (1988)


  2. Buckley, C., Voorhees, E.M.: Retrieval evaluation with incomplete information. In: SIGIR 2004, pp. 25–32. ACM, New York (2004)


  3. Carterette, B., Jones, R.: Evaluating search engines by modeling the relationship between relevance and clicks. Advances in Neural Information Processing Systems 20, 217–224 (2008)


  4. Craswell, N., Zoeter, O., Taylor, M., Ramsey, B.: An experimental comparison of click position-bias models. In: First ACM International Conference on Web Search and Data Mining, WSDM 2008 (2008)


  5. Dempster, A.P., Laird, N.M., Rubin, D.B.: Maximum likelihood from incomplete data via the EM algorithm. J. R. Statist. Soc. B 39, 1–38 (1977)


  6. Dupret, G.: User models to compare and evaluate web IR metrics. In: Proceedings of SIGIR 2009 Workshop on The Future of IR Evaluation (2009), http://staff.science.uva.nl/~kamps/ireval/papers/georges.pdf

  7. Dupret, G., Piwowarski, B.: A user behavior model for average precision and its generalization to graded judgments. In: Proceedings of the 33rd ACM SIGIR Conference (2010)

  8. Fuhr, N.: A probability ranking principle for interactive information retrieval. In: Information Retrieval. Springer, Heidelberg (2008)

  9. Granka, L., Joachims, T., Gay, G.: Eye-tracking analysis of user behavior in www search. In: ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pp. 478–479 (2004)


  10. Guo, F., Liu, C., Wang, Y.M.: Efficient multiple-click models in web search. In: WSDM 2009: Proceedings of the Second ACM International Conference on Web Search and Data Mining, pp. 124–131. ACM, New York (2009)


  11. Järvelin, K., Kekäläinen, J.: Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (ACM TOIS) 20(4), 222–246 (2002)

  12. Kelly, D.: Methods for Evaluating Interactive Information Retrieval Systems with Users. Foundations and Trends in Information Retrieval, vol. 3 (2009)


  13. Moffat, A., Zobel, J.: Rank-biased precision for measurement of retrieval effectiveness. ACM Trans. Inf. Syst. 27(1), 1–27 (2008)


  14. Robertson, S.: A new interpretation of average precision. In: SIGIR 2008, pp. 689–690. ACM, New York (2008)


  15. Voorhees, E.M., Harman, D. (eds.): TREC: Experiment and Evaluation in Information Retrieval. MIT Press, Cambridge (2005)


  16. Yilmaz, E., Aslam, J.A., Robertson, S.: A new rank correlation coefficient for information retrieval. In: SIGIR 2008: Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 587–594. ACM, New York (2008)



Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Dupret, G. (2011). Discounted Cumulative Gain and User Decision Models. In: Grossi, R., Sebastiani, F., Silvestri, F. (eds) String Processing and Information Retrieval. SPIRE 2011. Lecture Notes in Computer Science, vol 7024. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24583-1_2


  • DOI: https://doi.org/10.1007/978-3-642-24583-1_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24582-4

  • Online ISBN: 978-3-642-24583-1

  • eBook Packages: Computer Science
