Exploring Cost-Effective Approaches to Human Evaluation of Search Engine Relevance

  • Conference paper
Advances in Information Retrieval (ECIR 2005)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3408)

Included in the following conference series: European Conference on Information Retrieval (ECIR)

Abstract

In this paper, we examine novel and less expensive methods for search engine evaluation that do not rely on document relevance judgments. These methods, described within a proposed framework, are motivated by the increasing focus on search results presentation, by the growing diversity of documents and content sources, and by the need to measure effectiveness relative to other search engines. Correlation analysis of the data obtained from actual tests using a subset of the methods in the framework suggests that these methods measure different aspects of search engine performance. In practice, we argue that selecting a test method is a trade-off between measurement intent and cost.
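To make the abstract's correlation analysis concrete, the sketch below shows one standard way to compare two evaluation methods by how similarly they rank the same set of engines. It is an illustration only, not the authors' code or data: the engine names and scores are invented, and Kendall's tau is assumed as the rank-correlation measure.

```python
# Illustrative sketch only (not from the paper): compare two search engine
# evaluation methods by how similarly they rank the same set of engines.
# Engine names and scores below are hypothetical.
from scipy.stats import kendalltau

# Hypothetical per-engine scores from two evaluation methods, e.g. a
# judgment-based relevance metric vs. a cheaper comparative test.
method_a = {"engine1": 0.72, "engine2": 0.65, "engine3": 0.58, "engine4": 0.49}
method_b = {"engine1": 0.61, "engine2": 0.66, "engine3": 0.52, "engine4": 0.41}

engines = sorted(method_a)                 # fixed order so scores are paired
scores_a = [method_a[e] for e in engines]
scores_b = [method_b[e] for e in engines]

# Kendall's tau measures rank agreement: tau near 1 means the two methods
# order the engines almost identically; a low or negative tau suggests they
# capture different aspects of engine quality.
tau, p_value = kendalltau(scores_a, scores_b)
print(f"Kendall tau = {tau:.3f} (p = {p_value:.3f})")
```

With only four engines the significance test is weak; in practice one would correlate over many engines or query sets before concluding that two methods measure different things.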




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ali, K., Chang, CC., Juan, Y. (2005). Exploring Cost-Effective Approaches to Human Evaluation of Search Engine Relevance. In: Losada, D.E., Fernández-Luna, J.M. (eds) Advances in Information Retrieval. ECIR 2005. Lecture Notes in Computer Science, vol 3408. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-31865-1_26

Download citation

  • DOI: https://doi.org/10.1007/978-3-540-31865-1_26

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-25295-5

  • Online ISBN: 978-3-540-31865-1

  • eBook Packages: Computer Science, Computer Science (R0)
