DOI: 10.1145/3477495.3531893 (short paper)

Can Users Predict Relative Query Effectiveness?

Published: 07 July 2022

Abstract

Any given information need can be expressed via a wide range of possible queries. Recent work with such query variations has demonstrated that different queries can fetch notably divergent sets of documents, even when the queries have identical intents and superficial similarity. That is, different users might receive SERPs of quite different effectiveness for the same information need. That observation then raises an interesting question: do users have a sense of how useful any given query will be? Can they anticipate the effectiveness of alternative queries for the same retrieval need? To explore that question we designed and carried out a crowd-sourced user study in which we asked subjects to consider an information need statement expressed as a backstory, and then provide their opinions as to the relative usefulness of a set of queries ostensibly addressing that objective. We solicited opinions using two different interfaces: one that collected absolute ratings of queries, and one that required that the subjects place a set of queries into "order". We found that crowd workers are reasonably consistent in their estimates of how effective queries are likely to be, and also that their estimates correlate positively with actual system performance.
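
The abstract's final claim, that workers' estimates correlate positively with actual system performance, amounts to a rank-agreement computation between two orderings of the same query variations. The sketch below is a hypothetical illustration only: the abstract does not name the agreement measure used, so Kendall's tau, the toy ratings, and all variable names here are assumptions rather than the paper's actual pipeline.

```python
# Hypothetical sketch (not the paper's actual analysis): comparing crowd
# workers' usefulness estimates for a set of query variations against the
# measured effectiveness of the same queries, using Kendall's tau-a as
# the rank-agreement measure.
from itertools import combinations

def kendall_tau_a(xs, ys):
    """Kendall's tau-a: (concordant - discordant) / total pairs.
    No tie correction, so this is a simplification for illustration."""
    concordant = discordant = 0
    for i, j in combinations(range(len(xs)), 2):
        s = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(xs) * (len(xs) - 1) // 2
    return (concordant - discordant) / n_pairs

# Toy data, invented for illustration: mean worker rating per query
# variation (higher = expected to be more useful) and, say, the average
# precision a retrieval system achieves for the same queries.
crowd_ratings = [4.2, 3.1, 4.8, 2.0, 3.6]
system_scores = [0.41, 0.33, 0.52, 0.19, 0.28]

print(kendall_tau_a(crowd_ratings, system_scores))  # 0.8: strong positive agreement
```

A value near +1 would mean the queries workers judge more useful are also the ones that retrieve the more effective SERPs, which is the direction of correlation the abstract reports.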

Supplementary Material

MP4 File (SIGIR22-sp2005.mp4)
Short presentation video of the data collection process and main results




Published In

SIGIR '22: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval
July 2022, 3569 pages
ISBN: 9781450387323
DOI: 10.1145/3477495

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

1. query performance prediction
2. query variations

Qualifiers

• Short paper

Funding Sources

• Australian Research Council

Conference

SIGIR '22

Acceptance Rates

Overall acceptance rate: 792 of 3,983 submissions, 20%
