Abstract
Contrastive Learning (CL) has recently achieved remarkable performance in recommender systems, especially in Graph Collaborative Filtering (GCF), because comparing positive and negative sample pairs effectively mitigates data sparsity. In CL-based GCF models, these sample pairs are created by various data augmentation methods, which typically fall into two categories: graph-based and feature-based. However, existing methods are either slow to train or ignore graph structure during augmentation. To address these issues, we propose a frequency-based augmentation graph contrastive learning model named FAGCL, which takes graph structure into account without slowing training. Specifically, FAGCL consists of three key steps. First, we propose a frequency-based data augmentation method that reconstructs the user-item interaction graph to obtain sample pairs for contrastive learning; it is fast to compute and filters out high-frequency graph signals that may lower the model's accuracy. Second, building on the first step, we propose an optimized GNN forward-propagation process for CL-based GCF models to improve efficiency. Third, to avoid extra forward/backward passes, we adopt a one-encoder framework that combines the recommendation and contrastive learning tasks in a single pipeline rather than separating them. Extensive experiments on three benchmark datasets demonstrate that FAGCL achieves the fastest data augmentation and training, and outperforms other CL-based GCF models in accuracy in most cases.
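The frequency-based augmentation described above can be illustrated with a minimal sketch: keeping only the leading singular components of the user-item interaction matrix acts as a low-pass filter that discards high-frequency graph signal, and the reconstruction can serve as an augmented view for contrastive learning. The function name `low_pass_augment`, the randomized-SVD routine, and the choice of rank are illustrative assumptions, not FAGCL's exact procedure.

```python
import numpy as np

def low_pass_augment(interaction, rank=32, seed=0):
    """Illustrative low-pass augmentation of a user-item interaction
    matrix: keep only the top-`rank` singular components, discarding
    high-frequency signal. (Hypothetical sketch, not the paper's
    exact method.)"""
    rng = np.random.default_rng(seed)
    # Randomized SVD: project onto a small random subspace, then run
    # an exact SVD on the much smaller sketch matrix.
    n_users, n_items = interaction.shape
    sketch = interaction @ rng.standard_normal((n_items, rank))
    q, _ = np.linalg.qr(sketch)                      # orthonormal basis of the sketch
    u_small, s, vt = np.linalg.svd(q.T @ interaction, full_matrices=False)
    u = q @ u_small
    # Reconstructing from the leading components yields a low-pass
    # view of the graph, usable as the augmented counterpart for CL.
    return (u * s) @ vt

# Toy 3-user x 3-item interaction matrix and its rank-2 low-pass view.
R = np.array([[1., 0., 1.], [0., 1., 1.], [1., 1., 0.]])
R_aug = low_pass_augment(R, rank=2)
print(R_aug.shape)  # (3, 3)
```

Because the sketch has only `rank` columns, the cost is dominated by two thin matrix products rather than a full SVD, which is why this style of augmentation is fast on sparse interaction data.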
Data Availability and Access
Data are available at https://grouplens.org/datasets/movielens, https://www.yelp.com/dataset, and https://snap.stanford.edu/data/loc-gowalla.html.
Acknowledgements
This work is supported by the Natural Science Foundation of Sichuan Province (Project No. 2024NSFSC0502). We are also grateful to Assistant Professor Jingcao Yu from the School of Foreign Languages at the University of Electronic Science and Technology of China for her assistance in proofreading this paper.
Author information
Contributions
Jingyu Xu: Methodology, Software, Validation, Investigation, Data Curation, Writing - Original Draft, Writing - Review & Editing. Bo Yang: Conceptualization, Methodology, Resources, Writing - Review & Editing, Supervision, Project administration, Funding acquisition. Zimu Li: Validation. Wei Liu: Writing - Review & Editing. Hao Qiao: Validation.
Ethics declarations
Competing Interests
The authors declare that they have no conflicts of interest.
Ethical and informed consent for data used
The datasets used and/or analyzed during the current study are publicly available from the links provided in the paper, i.e., https://grouplens.org/datasets/movielens, https://www.yelp.com/dataset, and https://snap.stanford.edu/data/loc-gowalla.html.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Xu, J., Yang, B., Li, Z. et al. FAGCL: frequency-based augmentation graph contrastive learning for recommendation. Appl Intell 55, 44 (2025). https://doi.org/10.1007/s10489-024-05857-5