Direct measurement of training query quality for learning to rank

Published: 04 April 2016

Abstract

The conventional application of learning to rank algorithms uses as many training queries as possible, to exploit the benefit of a large amount of labeled data. However, using all available training queries may also include low-quality ones and, consequently, degrade retrieval effectiveness; hence the need to select training queries. Existing training query selection approaches feed a variety of indirect indicators of training query quality, such as query performance predictors and relevance scores, into a classification- or regression-based model. In this paper, we propose instead to select training queries by a direct measurement of their quality, namely the retrieval performance obtained on a subset of validation queries, rather than by indirect indicators that may not correlate strongly with a training query's quality. Evaluation on the standard LETOR 4.0 dataset shows that our proposed approach outperforms state-of-the-art baselines.
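
To make the idea concrete, the sketch below illustrates one plausible reading of this selection scheme: score each training query by the validation-set retrieval performance of a ranker trained with it, then keep the top-scoring fraction. The `Query` layout and the `train_ranker`, `evaluate`, and `keep_ratio` names are hypothetical stand-ins, not the paper's actual implementation, which may measure quality over query subsets or use a different selection rule.

```python
# A minimal sketch of direct training-query quality measurement, under the
# assumptions stated above; not the paper's actual code.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Query:
    """One query, with feature vectors and relevance labels for its
    candidate documents (hypothetical layout)."""
    qid: str
    doc_features: List[List[float]] = field(default_factory=list)
    doc_labels: List[int] = field(default_factory=list)


# Hypothetical stand-ins: `train_ranker` fits a learning-to-rank model on a
# list of queries; `evaluate` returns a mean retrieval metric (e.g. NDCG@10)
# of a model over a list of queries.
TrainFn = Callable[[List[Query]], object]
EvalFn = Callable[[object, List[Query]], float]


def measure_query_quality(train_queries: List[Query],
                          validation_queries: List[Query],
                          train_ranker: TrainFn,
                          evaluate: EvalFn) -> Dict[str, float]:
    """Score each training query directly: train a ranker with the query's
    labeled documents and measure retrieval performance on validation queries."""
    quality: Dict[str, float] = {}
    for q in train_queries:
        model = train_ranker([q])  # train using this query alone
        quality[q.qid] = evaluate(model, validation_queries)
    return quality


def select_training_queries(train_queries: List[Query],
                            quality: Dict[str, float],
                            keep_ratio: float = 0.8) -> List[Query]:
    """Keep the fraction of training queries with the highest measured quality."""
    ranked = sorted(train_queries, key=lambda q: quality[q.qid], reverse=True)
    keep = max(1, int(len(ranked) * keep_ratio))
    return ranked[:keep]
```

A final ranker would then be trained on the selected subset; in this sketch, any pairwise or listwise learner could serve as `train_ranker`, and any standard IR metric computed over the validation queries as `evaluate`.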


Cited By

  • ERR.Rank. Applied Intelligence 49(3), pages 1185-1199. Online publication date: 1 March 2019. DOI: 10.1007/s10489-018-1330-z
  • LEARning Next gEneration Rankers (LEARNER 2017). Proceedings of the ACM SIGIR International Conference on Theory of Information Retrieval, pages 331-332. Online publication date: 1 October 2017. DOI: 10.1145/3121050.3121110


    Published In

    SAC '16: Proceedings of the 31st Annual ACM Symposium on Applied Computing
    April 2016
    2360 pages
    ISBN: 9781450337397
    DOI: 10.1145/2851613

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. information retrieval
    2. learning to rank

    Qualifiers

    • Research-article

    Conference

    SAC 2016: Symposium on Applied Computing
    April 4-8, 2016
    Pisa, Italy

    Acceptance Rates

    SAC '16 Paper Acceptance Rate: 252 of 1,047 submissions, 24%
    Overall Acceptance Rate: 1,650 of 6,669 submissions, 25%
