
From "More Like This" to "Better Than This"

Published: 12 September 2016

Abstract

In this paper we address a novel retrieval problem we term the "Better Than This" problem. Given a user query to be answered by a search engine, together with a single example answer provided by the user that may or may not correctly answer the query, we determine whether the search engine contains a better answer. Our approach is to test whether the user's example answer can serve as relevance feedback that improves the search engine's ability to answer the query. If so, we conclude that the answer provided by the user is good enough and no better alternative need be considered. Otherwise, we decide that the best alternative the search engine can provide should be considered a better answer. Using a simulation-based evaluation, we demonstrate that our approach provides a better decision-making solution to this problem than several other alternatives.
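The decision procedure the abstract describes can be illustrated with a minimal sketch. This is not the paper's actual method: it assumes term-frequency vectors, cosine similarity, and Rocchio-style query expansion as stand-ins for whatever retrieval model and feedback mechanism the paper uses, and the function names (`better_than_this`, `rocchio_expand`), the `beta` feedback weight, and the improvement `margin` are all hypothetical.

```python
from collections import Counter
import math

def tf_vector(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(w * b.get(t, 0) for t, w in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rocchio_expand(query_vec, example_vec, beta=0.5):
    """Rocchio-style expansion: add weighted terms from the example answer."""
    expanded = Counter(query_vec)
    for term, w in example_vec.items():
        expanded[term] += beta * w
    return expanded

def better_than_this(query, example_answer, corpus, margin=0.0):
    """Decide whether the example answer is good enough.

    Returns (True, example_answer) if feeding the example back as
    relevance feedback improves the top retrieval score, i.e. the
    example is judged good enough; otherwise returns (False, best
    alternative document) as the proposed better answer.
    """
    q = tf_vector(query)
    q_fb = rocchio_expand(q, tf_vector(example_answer))
    best_base = max((cosine(q, tf_vector(d)), d) for d in corpus)
    best_fb = max((cosine(q_fb, tf_vector(d)), d) for d in corpus)
    if best_fb[0] > best_base[0] + margin:
        return True, example_answer
    return False, best_base[1]
```

For an on-topic example answer, the expanded query scores the collection higher than the original query and the example is accepted; for an off-topic example, feedback does not help and the engine's best alternative is returned instead.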



Published In

ICTIR '16: Proceedings of the 2016 ACM International Conference on the Theory of Information Retrieval
September 2016
318 pages
ISBN:9781450344975
DOI:10.1145/2970398
Publisher

Association for Computing Machinery

New York, NY, United States


Qualifiers

  • Short-paper

Conference

ICTIR '16

Acceptance Rates

ICTIR '16 Paper Acceptance Rate: 41 of 79 submissions, 52%
Overall Acceptance Rate: 235 of 527 submissions, 45%
