DOI: 10.1145/3404835.3462864
Research article

A Guided Learning Approach for Item Recommendation via Surrogate Loss Learning

Published: 11 July 2021

Abstract

Normalized discounted cumulative gain (NDCG) is one of the most popular evaluation metrics for recommender systems and learning-to-rank problems. Because it is non-differentiable, it cannot be optimized by gradient-based procedures. Over the last twenty years, a plethora of surrogate losses have been engineered to make it possible to learn recommendation and ranking models that optimize NDCG. However, binary-relevance implicit-feedback settings still pose a significant challenge for such surrogate losses, as they are usually designed and evaluated only for multi-level relevance feedback. In this paper, we address the limitations of directly optimizing the NDCG measure by proposing a guided learning approach (GuidedRec) that adopts recent advances in parameterized surrogate losses for NDCG. Starting from the observation that jointly learning a surrogate loss for NDCG and the recommendation model is very unstable, we design a stepwise approach that can be seamlessly applied to any recommender model trained with a point-wise logistic loss. The proposed approach guides the model towards optimizing NDCG using an independent surrogate-loss model trained to approximate the true NDCG measure, while retaining the original logistic loss as a stabilizer for the guiding procedure. In experiments on three recommendation datasets, we show that our guided surrogate learning approach yields models better optimized for NDCG than recent state-of-the-art approaches that use engineered surrogate losses.
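The abstract hinges on NDCG being non-differentiable. As an illustration (not code from the paper), the sketch below computes NDCG@k for a binary-relevance setting; the function names and example values are invented for this sketch. The sorting step is what blocks gradient-based optimization: NDCG depends on the scores only through the induced ranking, so it is piecewise-constant in the scores and its gradient is zero almost everywhere.

```python
import math

def dcg_at_k(relevances, k):
    # DCG: each item's gain, discounted by log2 of its 1-based rank plus one
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(scores, relevances, k):
    # Rank items by predicted score, descending. This argsort is the
    # non-differentiable step: an infinitesimal score change either leaves
    # the ranking (and hence NDCG) unchanged, or flips it discontinuously.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranked = [relevances[i] for i in order]
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(ranked, k) / ideal if ideal > 0 else 0.0

# Binary implicit feedback: only the third item is relevant,
# but the model ranks it second, so NDCG = 1/log2(3) ≈ 0.6309
scores = [0.9, 0.2, 0.7]
relevances = [0, 0, 1]
print(round(ndcg_at_k(scores, relevances, 3), 4))  # -> 0.6309
```

This zero-gradient behaviour is what motivates training a separate differentiable model to approximate the NDCG value, as the paper proposes, rather than backpropagating through the metric itself.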


Cited By

  • (2024) An Explainable Automated Model for Measuring Software Engineer Contribution. Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering, 783-794. DOI: 10.1145/3691620.3695071. Online publication date: 27-Oct-2024.
  • (2022) Context Aware Recommender Systems: A Novel Approach Based on Matrix Factorization and Contextual Bias. Electronics 11(7), 1003. DOI: 10.3390/electronics11071003. Online publication date: 24-Mar-2022.


Published In

SIGIR '21: Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval
July 2021
2998 pages
ISBN:9781450380379
DOI:10.1145/3404835


Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. collaborative filtering
  2. learning to rank
  3. recommender systems
  4. surrogate loss learning


Acceptance Rates

Overall Acceptance Rate 792 of 3,983 submissions, 20%
