
Cache-Enhanced InBatch Sampling with Difficulty-Based Replacement Strategies for Learning Recommenders

  • Conference paper
  • First Online:
Database Systems for Advanced Applications. DASFAA 2023 International Workshops (DASFAA 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13922)


Abstract

Negative sampling techniques are widely used in learning recommenders to reduce the computational cost of training over the entire corpus, but existing methods still incur significant overhead for re-encoding out-of-batch items. In-batch sampling, which treats the other items in the mini-batch as negatives, is a more practical strategy, although it suffers from exposure bias. Several works attempt to alleviate this bias with a cache mechanism that supplements additional items for a better approximation, but none of them sufficiently evaluate how informative different items are or further exploit that information. In this paper, we propose a Cache-Enhanced In-batch Sampling method with a Difficulty-Based Replacement Strategy (DBRS) for learning recommenders, which heuristically and adaptively updates the cache according to the training difficulty of negative samples. Specifically, the cache is updated based on the average and standard deviation of the training difficulty, corresponding to estimated first-order and second-order moments, so that items with high average difficulty and high uncertainty have a higher probability of being restored to the cache. Historically informative items are thus explored and exploited more effectively, leading to superior and faster convergence. The proposed DBRS is evaluated on four real-world datasets and outperforms existing state-of-the-art approaches.
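To make the replacement rule concrete, the sketch below illustrates one way such a difficulty-based cache update could look; it is not the paper's implementation. The difficulty proxy (an in-batch softmax over scores), the decay BETA, the uncertainty weight UCB_WEIGHT, and all sizes are assumed values chosen for illustration.

```python
# Minimal illustrative sketch (not the authors' code) of a difficulty-based
# cache replacement: keep running first/second moments of each item's
# training difficulty and restore items with high mean and high uncertainty.
import numpy as np

NUM_ITEMS = 10_000   # corpus size (toy value)
CACHE_SIZE = 256     # extra negatives kept across batches (assumed)
BETA = 0.9           # decay of the running moment estimates (assumed)
UCB_WEIGHT = 1.0     # weight on the uncertainty (std) term (assumed)

# Running first- and second-order moment estimates of difficulty per item.
m1 = np.zeros(NUM_ITEMS)   # mean difficulty
m2 = np.zeros(NUM_ITEMS)   # mean squared difficulty
cache = np.random.choice(NUM_ITEMS, CACHE_SIZE, replace=False)

def update_cache(cache, batch_items, batch_scores):
    """Refresh the cache from the current in-batch negatives.

    batch_items  : unique item ids serving as in-batch negatives
    batch_scores : their logits; a higher score means a harder negative
    """
    # Difficulty proxy: softmax of the scores over the mini-batch.
    difficulty = np.exp(batch_scores - batch_scores.max())
    difficulty /= difficulty.sum()

    # Exponential moving averages of the first and second moments.
    m1[batch_items] = BETA * m1[batch_items] + (1 - BETA) * difficulty
    m2[batch_items] = BETA * m2[batch_items] + (1 - BETA) * difficulty ** 2

    # Replacement score: high mean difficulty plus high uncertainty,
    # so such items get a higher probability of being restored.
    candidates = np.union1d(cache, batch_items)
    std = np.sqrt(np.maximum(m2[candidates] - m1[candidates] ** 2, 0.0))
    score = m1[candidates] + UCB_WEIGHT * std + 1e-8  # epsilon keeps p valid

    # Sample the new cache contents in proportion to the replacement scores.
    p = score / score.sum()
    return np.random.choice(candidates, CACHE_SIZE, replace=False, p=p)

# Toy usage: after scoring a mini-batch, refresh the cache.
batch_items = np.random.choice(NUM_ITEMS, 128, replace=False)
batch_scores = np.random.randn(128)
cache = update_cache(cache, batch_items, batch_scores)
```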



Author information

Corresponding author

Correspondence to Defu Lian.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Li, Y., Lian, D., Chen, J. (2023). Cache-Enhanced InBatch Sampling with Difficulty-Based Replacement Strategies for Learning Recommenders. In: El Abbadi, A., et al. Database Systems for Advanced Applications. DASFAA 2023 International Workshops. DASFAA 2023. Lecture Notes in Computer Science, vol 13922. Springer, Cham. https://doi.org/10.1007/978-3-031-35415-1_7

  • DOI: https://doi.org/10.1007/978-3-031-35415-1_7

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-35414-4

  • Online ISBN: 978-3-031-35415-1

  • eBook Packages: Computer Science, Computer Science (R0)
