
Diffusion-Based Graph Contrastive Learning for Recommendation with Implicit Feedback

Conference paper

Database Systems for Advanced Applications (DASFAA 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13246)

Abstract

Recent studies on self-supervised learning with graph-based recommendation models have achieved outstanding performance. They usually introduce auxiliary learning tasks that maximize the mutual information between representations of the original graph and its augmented views. However, most of these models adopt random dropout to construct the additional graph view and thus fail to differentiate the importance of edges. Because such methods do not adequately capture the structural properties of the user-item interaction graph, they yield suboptimal recommendation performance. In this paper, we propose a Graph Diffusion Contrastive Learning (GDCL) framework for recommendation to close this gap. Specifically, we perform graph diffusion on the user-item interaction graph. Then, the diffusion graph is encoded to preserve its heterogeneity by learning a dedicated representation for each relation type. A symmetric contrastive learning objective is used to contrast local node representations of the diffusion graph with those of the user-item interaction graph, yielding better user and item representations. Extensive experiments on real datasets demonstrate that GDCL consistently outperforms state-of-the-art recommendation methods.
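
To make the abstract's pipeline concrete, the sketch below illustrates its two named ingredients: a Personalized PageRank-style diffusion of the user-item interaction graph (in the spirit of graph diffusion convolution) and a symmetric InfoNCE-style contrastive loss between node representations of the original graph and its diffusion view. This is only a minimal illustration under assumed hyperparameters (alpha, eps, tau) and stand-in embeddings; it is not the authors' implementation and omits GDCL's relation-aware encoding of the heterogeneous diffusion graph.

```python
import numpy as np
import torch
import torch.nn.functional as F

def ppr_diffusion(adj, alpha=0.15, eps=1e-4):
    """Dense Personalized PageRank diffusion S = alpha * (I - (1 - alpha) * A_hat)^-1
    over the symmetrically normalised adjacency A_hat; entries below eps are
    sparsified away. Hyperparameter values are illustrative assumptions."""
    deg = np.maximum(adj.sum(axis=1), 1e-12)
    d_inv_sqrt = deg ** -0.5
    a_hat = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    s = alpha * np.linalg.inv(np.eye(adj.shape[0]) - (1.0 - alpha) * a_hat)
    s[s < eps] = 0.0
    return s

def symmetric_infonce(z_orig, z_diff, tau=0.2):
    """Symmetric InfoNCE loss contrasting node representations of the interaction
    graph (z_orig) with those of its diffusion view (z_diff); the positive pair
    for each node is the same node in the other view."""
    z_orig = F.normalize(z_orig, dim=1)
    z_diff = F.normalize(z_diff, dim=1)
    logits = z_orig @ z_diff.t() / tau                  # pairwise similarities
    labels = torch.arange(z_orig.size(0), device=z_orig.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# Toy usage: 3 users and 2 items form a (3+2) x (3+2) bipartite adjacency.
R = np.array([[1, 0], [1, 1], [0, 1]], dtype=float)     # user-item interactions
A = np.block([[np.zeros((3, 3)), R], [R.T, np.zeros((2, 2))]])
S = ppr_diffusion(A)                                    # diffusion graph (dense)
z_orig = torch.randn(5, 16)                             # stand-in embeddings, original view
z_diff = torch.randn(5, 16)                             # stand-in embeddings, diffusion view
loss = symmetric_infonce(z_orig, z_diff)
```

In a full model, the diffusion matrix would typically be computed sparsely (for example with approximate PPR), the two sets of embeddings would come from graph encoders over the interaction graph and its diffusion view, and the contrastive loss would be added to the recommendation objective.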


Notes

  1. https://grouplens.org/datasets/movielens/1m/.

  2. https://www.yelp.com/dataset.


Acknowledgments

This research is supported by Alibaba Group through Alibaba Innovative Research (AIR) Program and Alibaba-NTU Singapore Joint Research Institute (JRI), Nanyang Technological University, Singapore.

Author information


Corresponding author

Correspondence to Chunyan Miao.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, L., Liu, Y., Zhou, X., Miao, C., Wang, G., Tang, H. (2022). Diffusion-Based Graph Contrastive Learning for Recommendation with Implicit Feedback. In: Bhattacharya, A., et al. Database Systems for Advanced Applications. DASFAA 2022. Lecture Notes in Computer Science, vol 13246. Springer, Cham. https://doi.org/10.1007/978-3-031-00126-0_15

  • DOI: https://doi.org/10.1007/978-3-031-00126-0_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-00125-3

  • Online ISBN: 978-3-031-00126-0

  • eBook Packages: Computer Science (R0)
