FAGCL: frequency-based augmentation graph contrastive learning for recommendation

Published in Applied Intelligence.

Abstract

Contrastive Learning (CL) has recently achieved remarkable performance in recommendation systems, especially in Graph Collaborative Filtering (GCF), because comparing positive and negative sample pairs effectively mitigates data sparsity. In CL-based GCF models, these sample pairs can be created by various data augmentation methods, which typically fall into two main categories: graph-based and feature-based. However, these methods are either slow to train or ignore graph structure during data augmentation. To address these issues, in this paper we propose a frequency-based augmentation graph contrastive learning model named FAGCL, which takes graph structure into account without slowing down training. Specifically, FAGCL consists of three key steps. First, we propose a frequency-based data augmentation method that reconstructs the user-item interaction graph and generates sample pairs for contrastive learning; it is fast to compute and filters out high-frequency graph signals that may lower the model's accuracy. Second, to improve efficiency, we propose an optimized GNN forward-propagation process for CL-based GCF models built on the first step. Third, to avoid extra forward/backward propagation, we adopt a one-encoder framework that combines the recommendation and contrastive learning tasks in the same pipeline instead of separating them. Extensive experiments on three benchmark datasets demonstrate that FAGCL is the fastest in both data augmentation and training, and outperforms other CL-based GCF models in accuracy in most cases.
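The first and third steps described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: `low_pass_augment` is a hypothetical helper that approximates frequency-based augmentation by keeping only the top-k singular components of the normalized interaction matrix (the low-frequency graph signals), and `info_nce` is the standard InfoNCE contrastive objective commonly used in CL-based GCF models.

```python
import numpy as np

def low_pass_augment(R, k):
    """Low-pass filter a user-item interaction matrix by keeping
    only its top-k singular components (the low-frequency graph
    signals); higher-frequency components are discarded."""
    # Symmetrically normalize by user and item degrees (LightGCN-style).
    du = R.sum(axis=1, keepdims=True); du[du == 0] = 1
    di = R.sum(axis=0, keepdims=True); di[di == 0] = 1
    R_norm = R / np.sqrt(du) / np.sqrt(di)
    # Truncated SVD: leading singular vectors correspond to the
    # smoothest (lowest-frequency) signals on the bipartite graph.
    U, s, Vt = np.linalg.svd(R_norm, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

def info_nce(z1, z2, tau=0.2):
    """InfoNCE loss between two views of node embeddings: row i of
    z1 and row i of z2 form the positive pair, all other rows of z2
    serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    pos = np.diag(logits)
    return float(np.mean(np.log(np.exp(logits).sum(axis=1)) - pos))
```

In a one-encoder setup, embeddings from the original and filtered graphs would be produced by the same encoder in a single pass, and the InfoNCE term would simply be added to the recommendation (e.g. BPR) loss rather than computed in a separate forward/backward pipeline.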




Data Availability and Access

Data are available at https://grouplens.org/datasets/movielens, https://www.yelp.com/dataset, and https://snap.stanford.edu/data/loc-gowalla.html.

Notes

  1. https://www.yelp.com/dataset

  2. https://tianchi.aliyun.com/dataset/53

  3. https://github.com/RUCAIBox/RecBole


Acknowledgements

This work is supported by the Natural Science Foundation of Sichuan Province (Project No. 2024NSFSC0502). We are also grateful to Assistant Prof. Jingcao Yu from the School of Foreign Languages at the University of Electronic Science and Technology of China for her assistance in proofreading this paper.

Author information

Authors and Affiliations

Authors

Contributions

Jingyu Xu: Methodology, Software, Validation, Investigation, Data Curation, Writing - Original Draft, Writing - Review & Editing. Bo Yang: Conceptualization, Methodology, Resources, Writing - Review & Editing, Supervision, Project administration, Funding acquisition. Zimu Li: Validation. Wei Liu: Writing - Review & Editing. Hao Qiao: Validation.

Corresponding author

Correspondence to Bo Yang.

Ethics declarations

Competing Interests

The authors declare that they have no conflicts of interest.

Ethical and informed consent for data used

The datasets used and/or analyzed during the current study are publicly available from the links provided in the paper, i.e., https://grouplens.org/datasets/movielens, https://www.yelp.com/dataset, and https://snap.stanford.edu/data/loc-gowalla.html.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Xu, J., Yang, B., Li, Z. et al. FAGCL: frequency-based augmentation graph contrastive learning for recommendation. Appl Intell 55, 44 (2025). https://doi.org/10.1007/s10489-024-05857-5


  • DOI: https://doi.org/10.1007/s10489-024-05857-5
