DOI: 10.1145/2970398.2970427
Public Access

Retrievability in API-Based "Evaluation as a Service"

Published: 12 September 2016

Abstract

"Evaluation as a service" (EaaS) refers to a family of related evaluation methodologies that enables community-wide evaluations and the construction of test collections on documents that cannot be easily distributed. In the API-based approach, the basic idea is that evaluation organizers provide a service API through which the evaluation task can be completed, without providing access to the raw collection. One concern with this evaluation approach is that the API introduces biases and limits the diversity of techniques that can be brought to bear on the problem. In this paper, we tackle the question of API bias using the concept of retrievability. The raw data for our analyses come from a naturally-occurring experiment where we observed the same groups completing the same task with the API and also with access to the raw collection. We find that the retrievability bias of runs generated in both cases are comparable. Moreover, the fraction of relevant tweets retrieved through the API by the participating groups is at least as high as when they had access to the raw collection.

Cited By

  • (2020) "Research", in: Information Retrieval: A Biomedical and Health Perspective, pp. 337-405. DOI: 10.1007/978-3-030-47686-1_8. Online publication date: 23-Jul-2020
  • (2017) "Finally, a Downloadable Test Collection of Tweets", Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1225-1228. DOI: 10.1145/3077136.3080667. Online publication date: 7-Aug-2017
  • (2017) "EveTAR: building a large-scale multi-task test collection over Arabic tweets", Information Retrieval Journal, 21(4), pp. 307-336. DOI: 10.1007/s10791-017-9325-7. Online publication date: 21-Dec-2017
  • (2017) "Towards Privacy-Preserving Evaluation for Information Retrieval Models Over Industry Data Sets", Information Retrieval Technology, pp. 210-221. DOI: 10.1007/978-3-319-70145-5_16. Online publication date: 8-Nov-2017

    Published In

    ICTIR '16: Proceedings of the 2016 ACM International Conference on the Theory of Information Retrieval
    September 2016
    318 pages
    ISBN: 9781450344975
    DOI: 10.1145/2970398

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Badges

    • Best Short Paper

    Author Tags

    1. meta-evaluation
    2. trec
    3. tweet search

    Qualifiers

    • Short-paper

    Acceptance Rates

    ICTIR '16 paper acceptance rate: 41 of 79 submissions (52%)
    Overall acceptance rate: 235 of 527 submissions (45%)
