DOI:10.1145/2513150.2513162

Lerot: an online learning to rank framework

Published: 01 November 2013

Abstract

Online learning to rank methods for IR allow retrieval systems to optimize their own performance directly from interactions with users, via click feedback. The software package Lerot, presented in this paper, bundles all the ingredients needed for experimenting with online learning to rank for IR: several online learning algorithms, interleaving methods, and a full suite of ways to evaluate these methods. In the absence of real users, the bundled evaluation method relies on simulations of users interacting with the search engine. The software presented here has been used to verify the findings of more than six papers at major information retrieval venues over the last few years.
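The experimental setup the abstract describes — two rankers compared through interleaving, with clicks produced by a simulated user — can be illustrated with a short, self-contained sketch. This is a minimal illustration of the general technique, not Lerot's actual API: the function names, click probabilities, and toy documents below are assumptions made for the example only.

import random

def team_draft_interleave(ranking_a, ranking_b, length):
    """Team-draft interleaving (illustrative): in each round the two rankers
    pick in random order, and we remember which ranker ("team") contributed
    each document so that clicks can be credited afterwards."""
    interleaved, teams, seen = [], [], set()
    while len(interleaved) < length:
        order = [("a", ranking_a), ("b", ranking_b)]
        random.shuffle(order)  # randomize which team picks first this round
        for team, ranking in order:
            for doc in ranking:
                if doc not in seen:
                    interleaved.append(doc)
                    teams.append(team)
                    seen.add(doc)
                    break
            if len(interleaved) == length:
                break
    return interleaved, teams

def simulate_clicks(ranking, relevance, p_rel=0.8, p_nonrel=0.2, p_stop=0.5):
    """Cascade-style simulated user (assumed parameters): scan the list
    top-down, click relevant documents with high probability, and possibly
    stop after a satisfying (relevant) click."""
    clicks = []
    for rank, doc in enumerate(ranking):
        if random.random() < (p_rel if relevance.get(doc, 0) else p_nonrel):
            clicks.append(rank)
            if relevance.get(doc, 0) and random.random() < p_stop:
                break
    return clicks

# Toy comparison: ranker A places the two relevant documents on top,
# ranker B shows the same documents in reverse order.
relevance = {"d0": 1, "d1": 1, "d2": 0, "d3": 0, "d4": 0}
ranker_a = ["d0", "d1", "d2", "d3", "d4"]
ranker_b = list(reversed(ranker_a))

wins_a = wins_b = 0
for _ in range(1000):  # each iteration simulates one query impression
    interleaved, teams = team_draft_interleave(ranker_a, ranker_b, 5)
    clicks = simulate_clicks(interleaved, relevance)
    credit_a = sum(teams[r] == "a" for r in clicks)
    credit_b = sum(teams[r] == "b" for r in clicks)
    wins_a += credit_a > credit_b
    wins_b += credit_b > credit_a

print(f"A preferred on {wins_a} impressions, B on {wins_b}")

In an online learning setting, an algorithm such as dueling bandit gradient descent would use each per-impression outcome to decide whether to move its weight vector toward the candidate ranker that was compared against the current one; the sketch above only covers the interleaved comparison and the simulated click feedback.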



    Published In

    LivingLab '13: Proceedings of the 2013 workshop on Living labs for information retrieval evaluation
    November 2013
    34 pages
    ISBN:9781450324205
    DOI:10.1145/2513150
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. experimentation
    2. information retrieval
    3. interleaved comparisons
    4. learning to rank
    5. software

    Qualifiers

    • Demonstration

    Conference

    CIKM'13

    Acceptance Rates

    LivingLab '13 paper acceptance rate: 7 of 7 submissions, 100%
    Overall acceptance rate: 7 of 7 submissions, 100%
