Using Clicks as Implicit Judgments: Expectations Versus Observations

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 4956)

Abstract

Clickthrough data has become increasingly popular as an implicit indicator of user feedback. Previous analysis has suggested that user click behaviour is subject to a quality bias—that is, users click at different rank positions when viewing effective search results than when viewing less effective ones. Based on this observation, it should be possible to use click data to infer the quality of the underlying search system. In this paper we carry out a user study to systematically investigate how click behaviour changes across different levels of search system effectiveness, as measured by information retrieval performance metrics. Our results show that click behaviour does not vary systematically with the quality of search results. However, click behaviour does vary significantly between individual users, and between search topics. This suggests that using direct click behaviour—click rank and click frequency—to infer the quality of the underlying search system is problematic. Further analysis of our user click data indicates that the correspondence between clicks in a search result list and subsequent confirmation that the clicked resource is actually relevant is low. Clicks should therefore be used as an implicit indication of relevance only with caution.
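The "direct click behaviour" signals the abstract refers to can be made concrete with a small sketch. The snippet below is illustrative only: the click logs are hypothetical, and the paper does not prescribe this computation. It shows how one would compute mean click rank and click count per system, the kind of comparison that the quality-bias hypothesis predicts should separate an effective system from a less effective one.

```python
from statistics import mean

# Hypothetical click logs: for each search system, the rank positions
# (1 = top of the results page) that users clicked across a set of queries.
clicks = {
    "system_A": [1, 1, 2, 1, 3, 1, 2],  # presumed more effective system
    "system_B": [1, 4, 2, 5, 1, 3, 6],  # presumed less effective system
}

def click_statistics(ranks):
    """Compute the two direct click signals discussed in the paper:
    mean click rank and click frequency (here, total click count)."""
    return {"mean_rank": mean(ranks), "click_count": len(ranks)}

stats = {system: click_statistics(ranks) for system, ranks in clicks.items()}

# Under the quality-bias hypothesis, the more effective system should
# show a lower mean click rank (clicks concentrated near the top).
for system, s in stats.items():
    print(f"{system}: mean rank {s['mean_rank']:.2f}, {s['click_count']} clicks")
```

The paper's finding is that, in practice, variation between individual users and between search topics swamps this signal, which is why such aggregate click statistics are an unreliable proxy for system quality.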


References

  1. Agichtein, E., Brill, E., Dumais, S.: Improving web search ranking by incorporating user behavior information. In: Efthimiadis, et al. (eds.) [7], pp. 19–26.

  2. Agichtein, E., Brill, E., Dumais, S., Ragno, R.: Learning user interaction models for predicting web search result preferences. In: Efthimiadis, et al. (eds.) [7], pp. 3–10.

  3. Allan, J., Carterette, B., Lewis, J.: When will information retrieval be “good enough”? In: Marchionini, et al. (eds.) [15], pp. 433–440.

  4. Bailey, P., Craswell, N., Hawking, D.: Engineering a multi-purpose test collection for web retrieval experiments. Information Processing and Management 39(6), 853–871 (2003)

  5. Buckley, C., Voorhees, E.M.: Retrieval system evaluation. In: TREC: experiment and evaluation in information retrieval [21]

  6. Craswell, N., Szummer, M.: Random walks on the click graph. In: Kraaij, et al. (eds.) [14], pp. 239–246

  7. Efthimiadis, E., Dumais, S., Hawking, D., Järvelin, K. (eds.): Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Seattle, WA (2006)

  8. Fox, S., Karnawat, K., Mydland, M., Dumais, S., White, T.: Evaluating implicit measures to improve web search. ACM Transactions on Information Systems 23(2), 147–168 (2005)

  9. Harman, D.K.: The TREC test collection. In: TREC: experiment and evaluation in information retrieval [21]

  10. Joachims, T.: Optimizing search engines using clickthrough data. In: Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, Edmonton, Alberta, Canada, pp. 133–142. ACM Press, New York (2002)

  11. Joachims, T., Granka, L., Pan, B., Hembrooke, H., Gay, G.: Accurately interpreting clickthrough data as implicit feedback. In: Marchionini, et al. (eds.) [15], pp. 154–161.

  12. Joachims, T., Granka, L., Pan, B., Hembrooke, H., Radlinski, F., Gay, G.: Evaluating the accuracy of implicit feedback from clicks and query reformulations in web search. ACM Transactions on Information Systems 25(2), 7 (2007)

  13. Kemp, C., Ramamohanarao, K.: Long-term learning for web search engines. In: Proceedings of the 6th European Conference on Principles of Data Mining and Knowledge Discovery, London, UK, pp. 263–274. Springer, Heidelberg (2002)

  14. Kraaij, W., de Vries, A., Clarke, C., Fuhr, N., Kando, N. (eds.): Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Amsterdam, The Netherlands (2007)

  15. Marchionini, G., Moffat, A., Tait, J., Baeza-Yates, R., Ziviani, N. (eds.): Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Salvador, Brazil (2005)

  16. Radlinski, F., Joachims, T.: Query chains: learning to rank from implicit feedback. In: Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, Chicago, Illinois, USA, pp. 239–248 (2005)

  17. Radlinski, F., Joachims, T.: Active exploration for learning rankings from clickthrough data. In: Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining, San Jose, California, pp. 570–579 (2007)

  18. Turpin, A., Scholer, F.: User performance versus precision measures for simple search tasks. In: Efthimiadis, et al. (eds.) [7], pp. 11–18.

  19. Turpin, A., Scholer, F., Billerbeck, B., Abel, L.: Examining the pseudo-standard web search engine results page. In: Proceedings of the 11th Australasian Document Computing Symposium, Brisbane, Australia, pp. 9–16 (2006)

  20. Turpin, A., Tsegay, Y., Hawking, D., Williams, H.E.: Fast generation of result snippets in web search. In: Kraaij, et al. (eds.) [14], pp. 127–134.

  21. Voorhees, E.M., Harman, D.K.: TREC: experiment and evaluation in information retrieval. MIT Press, Cambridge (2005)

Editor information

Craig Macdonald, Iadh Ounis, Vassilis Plachouras, Ian Ruthven, Ryen W. White

Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

Cite this paper

Scholer, F., Shokouhi, M., Billerbeck, B., Turpin, A. (2008). Using Clicks as Implicit Judgments: Expectations Versus Observations. In: Macdonald, C., Ounis, I., Plachouras, V., Ruthven, I., White, R.W. (eds) Advances in Information Retrieval. ECIR 2008. Lecture Notes in Computer Science, vol 4956. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-78646-7_6

  • DOI: https://doi.org/10.1007/978-3-540-78646-7_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-78645-0

  • Online ISBN: 978-3-540-78646-7
