
Enhanced Learning to Rank using Cluster-loss Adjustment

Published: 03 January 2019

Abstract

Most Learning to Rank (LTR) algorithms, such as Ranking SVM, RankNet, LambdaRank, and LambdaMART, use only relevance-label judgments as ground truth for training. But in common scenarios such as ranking information cards (Google Now and other personal assistants), mobile notifications, and Netflix recommendations, there is additional information that can be captured from user behavior and from how the user interacts with the retrieved items. Within the relevance labels, there may be distinct sets whose membership (i.e., cluster information) can be derived implicitly from user interaction (positive, negative, neutral, etc.) or from explicit user feedback ('Do not show again', 'I like this suggestion', etc.). This additional information provides significant knowledge for training a ranking algorithm with a two-dimensional output variable. This paper proposes a novel method that uses the relevance label together with cluster information to better train ranking models. Results on a user-trial notification-ranking dataset and on standard datasets such as LETOR 4.0, MSLR-WEB10K, and Yahoo! LTR further support this claim.
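The abstract does not spell out the paper's exact loss formulation, but the core idea, reweighting the training signal for pairs of items according to the user-feedback clusters they belong to, can be illustrated with a minimal sketch. Everything here (the function names, the cluster labels, and the specific weights) is an illustrative assumption, not the authors' method:

```python
# Hypothetical sketch: a pairwise ranking loss adjusted by cluster information.
# Weights and cluster labels are illustrative assumptions, not from the paper.

def pairwise_hinge_loss(score_i, score_j, margin=1.0):
    """Standard pairwise hinge loss: item i should score above item j."""
    return max(0.0, margin - (score_i - score_j))

# Assumed multiplier table: pairs whose items fall in different
# user-feedback clusters (e.g. 'positive' vs 'negative') contribute
# more to the loss than pairs from the same cluster (default 1.0).
CLUSTER_PAIR_WEIGHT = {
    ("positive", "negative"): 2.0,
    ("positive", "neutral"): 1.5,
    ("neutral", "negative"): 1.5,
}

def cluster_adjusted_loss(pairs):
    """pairs: iterable of (score_i, score_j, cluster_i, cluster_j),
    where item i carries the higher relevance label than item j."""
    total = 0.0
    for s_i, s_j, c_i, c_j in pairs:
        weight = CLUSTER_PAIR_WEIGHT.get((c_i, c_j), 1.0)
        total += weight * pairwise_hinge_loss(s_i, s_j)
    return total
```

In this sketch, a misordered positive/negative pair is penalized twice as hard as a same-cluster pair, which is one plausible way to fold the second (cluster) dimension of the output variable into an otherwise standard pairwise objective.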


Cited By

View all
  • (2019) Learning Mobile App Embeddings Using Multi-task Neural Network. Natural Language Processing and Information Systems, 10.1007/978-3-030-23281-8_3, pp. 29-40. Online publication date: 21-Jun-2019.


Published In

cover image ACM Other conferences
CODS-COMAD '19: Proceedings of the ACM India Joint International Conference on Data Science and Management of Data
January 2019
380 pages
ISBN:9781450362078
DOI:10.1145/3297001

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Information Retrieval
  2. Learning to Rank
  3. Preference Learning

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

CoDS-COMAD '19: 6th ACM IKDD CoDS and 24th COMAD
January 3 - 5, 2019
Kolkata, India

Acceptance Rates

CoDS-COMAD '19 paper acceptance rate: 62 of 198 submissions (31%)
Overall acceptance rate: 197 of 680 submissions (29%)


