Simulating Simple and Fallible Relevance Feedback

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6611)

Abstract

Much of the research in relevance feedback (RF) has been performed under laboratory conditions using test collections and either test persons or simple simulation. These studies have given mixed results. The design of the present study is unique. First, the initial queries are realistically short queries generated by real end-users. Second, we perform a user simulation with several RF scenarios. Third, we simulate human fallibility in providing RF, i.e., incorrectness in feedback. Fourth, we employ graded relevance assessments in the evaluation of the retrieval results. The research question is: how does RF affect IR performance when initial queries are short and feedback is fallible? Our findings indicate that very fallible feedback is no different from pseudo-relevance feedback (PRF) and not effective on short initial queries. However, RF with empirically observed fallibility is as effective as correct RF and able to improve the performance of short initial queries.
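The core of the study is the simulation of a fallible user: judgments on the top-ranked documents are occasionally wrong, and pseudo-relevance feedback (PRF) is the limiting case in which the top documents are blindly accepted as relevant. The sketch below illustrates one minimal way such a simulation can be set up; it is an assumed illustration, not the authors' exact protocol. The function names (`simulate_feedback`, `pseudo_relevance_feedback`), the error-rate parameter, the 0–3 relevance grades, and the top-k inspection window are all illustrative assumptions.

```python
import random

# Minimal sketch of fallible relevance-feedback simulation (illustrative
# assumption, not the paper's exact procedure). A simulated user inspects
# the top-k documents of the initial ranking and marks each one relevant
# or non-relevant; with probability `error_rate` the judgment is flipped,
# modelling human fallibility. Graded assessments (0 = non-relevant ..
# 3 = highly relevant) are binarised at a threshold before the flip.

def simulate_feedback(ranked_docs, graded_qrels, k=10,
                      error_rate=0.2, rel_threshold=1, seed=0):
    """Return the set of documents the simulated user marks as relevant.

    ranked_docs  -- document ids in initial ranking order
    graded_qrels -- dict doc_id -> graded relevance (0..3)
    error_rate   -- probability of giving an incorrect judgment
    """
    rng = random.Random(seed)
    marked_relevant = set()
    for doc in ranked_docs[:k]:
        truly_relevant = graded_qrels.get(doc, 0) >= rel_threshold
        judged_relevant = truly_relevant
        if rng.random() < error_rate:        # fallible user: judgment is flipped
            judged_relevant = not truly_relevant
        if judged_relevant:
            marked_relevant.add(doc)
    return marked_relevant


def pseudo_relevance_feedback(ranked_docs, k=10):
    """PRF baseline: blindly assume the top-k documents are relevant."""
    return set(ranked_docs[:k])


if __name__ == "__main__":
    # Toy example with hypothetical document ids and graded assessments.
    ranking = ["d3", "d7", "d1", "d9", "d4"]
    qrels = {"d3": 3, "d7": 0, "d1": 2, "d9": 0, "d4": 1}
    print("correct RF :", simulate_feedback(ranking, qrels, k=5, error_rate=0.0))
    print("fallible RF:", simulate_feedback(ranking, qrels, k=5, error_rate=0.3))
    print("PRF        :", pseudo_relevance_feedback(ranking, k=5))
```

In this framing, correct RF corresponds to `error_rate = 0.0`, empirically observed fallibility to a moderate error rate, and very fallible feedback approaches the PRF baseline, which matches the comparison the abstract describes.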




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Baskaya, F., Keskustalo, H., Järvelin, K. (2011). Simulating Simple and Fallible Relevance Feedback. In: Clough, P., et al. Advances in Information Retrieval. ECIR 2011. Lecture Notes in Computer Science, vol 6611. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-20161-5_59

  • DOI: https://doi.org/10.1007/978-3-642-20161-5_59

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-20160-8

  • Online ISBN: 978-3-642-20161-5

  • eBook Packages: Computer Science, Computer Science (R0)
