Comparing Crowd-Based, Game-Based, and Machine-Based Approaches in Initial Query and Query Refinement Tasks

  • Conference paper
Advances in Information Retrieval (ECIR 2013)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 7814)

Abstract

Human computation techniques have demonstrated their ability to accomplish portions of tasks that machine-based techniques find difficult. Query refinement is a task that may benefit from human involvement. We conduct an experiment that evaluates the contributions of two user types: student participants and crowdworkers hired from an online labor market. Human participants are assigned to one of two query interfaces: a traditional web-based interface or a game-based interface. We ask each group to manually construct queries in response to TREC information needs and calculate the resulting recall and precision. Traditional web interface users are given feedback on their initial queries and asked to use this information to reformulate their original queries. Game interface users receive instant scoring and are asked to refine their queries based on their scores. We measure the resulting feedback-based improvement for each group and compare the results from the human computation techniques with those from machine-based algorithms.
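The abstract describes scoring manually constructed queries against TREC information needs by recall and precision. Since the full paper is not included here, the snippet below is only a minimal, hypothetical sketch of that kind of evaluation: it reads TREC-style relevance judgments (qrels) and computes set-based precision and recall for the document lists retrieved by an initial and a refined query. The file name, topic number, and document IDs are illustrative placeholders, not values taken from the paper.

```python
# Hypothetical sketch (not the authors' code): set-based precision and recall
# for queries evaluated against TREC-style relevance judgments (qrels).

def load_qrels(path):
    """Parse TREC qrels lines of the form 'topic_id iteration doc_id relevance'."""
    relevant = {}
    with open(path) as f:
        for line in f:
            topic, _, doc_id, rel = line.split()
            if int(rel) > 0:
                relevant.setdefault(topic, set()).add(doc_id)
    return relevant


def precision_recall(retrieved_ids, relevant_ids):
    """Set-based precision and recall of a retrieved document list."""
    retrieved = set(retrieved_ids)
    hits = retrieved & relevant_ids
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant_ids) if relevant_ids else 0.0
    return precision, recall


if __name__ == "__main__":
    qrels = load_qrels("qrels.trec")            # hypothetical file name
    topic = "301"                               # illustrative TREC topic number
    initial_run = ["FT911-3", "LA060189-0012"]  # doc IDs retrieved for the initial query
    refined_run = ["FT911-3", "FT911-7", "LA060189-0012"]
    for label, run in [("initial query", initial_run), ("refined query", refined_run)]:
        p, r = precision_recall(run, qrels.get(topic, set()))
        print(f"{label}: precision={p:.3f} recall={r:.3f}")
```

The same scoring could be applied to a machine-based refinement run (for example, a pseudo-relevance-feedback baseline) to reproduce the kind of human-versus-machine comparison the abstract describes.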

Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Harris, C.G., Srinivasan, P. (2013). Comparing Crowd-Based, Game-Based, and Machine-Based Approaches in Initial Query and Query Refinement Tasks. In: Serdyukov, P., et al. Advances in Information Retrieval. ECIR 2013. Lecture Notes in Computer Science, vol 7814. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-36973-5_42

  • DOI: https://doi.org/10.1007/978-3-642-36973-5_42

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-36972-8

  • Online ISBN: 978-3-642-36973-5

  • eBook Packages: Computer Science, Computer Science (R0)
