
Self-supervised graph representation learning using multi-scale subgraph views contrast

Neural Computing and Applications

Abstract

Graph representation learning has received widespread attention in recent years. Most existing graph representation learning methods are supervised and require the complete graph as input, which incurs a high memory cost. Moreover, real-world graph data are often unlabeled, and manually labeling them is expensive. Self-supervised learning offers a potential solution to these issues. Recently, multi-scale and multi-level self-supervised contrastive methods have been applied successfully, but most of them still operate on the complete graph. Although subgraph contrastive strategies alleviate this shortcoming, existing subgraph contrastive methods rely on a single contrastive strategy and therefore cannot fully exploit the information in the graph. To address these problems, we introduce a novel self-supervised contrastive framework for graph representation learning. We generate multi-subgraph views for all nodes with a mixed sampling method and learn node representations with a multi-scale contrastive loss. Specifically, we employ two objectives, a bootstrapping contrastive loss and a node-level agreement contrastive loss, to maximize the agreement between different subgraph views of the same node. Extensive experiments show that, compared with state-of-the-art graph representation learning methods, our method outperforms a range of existing models on the node classification task while reducing memory cost.
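
As a rough illustration of the multi-scale objective described in the abstract, the following PyTorch sketch combines a BYOL-style bootstrapping loss with an InfoNCE-style node-level agreement loss over two subgraph views of the same nodes. The paper's implementation is not shown on this page, so every detail here (the online/target split, the temperature, the weighting coefficient `alpha`) is a hypothetical placeholder, not the authors' actual code.

```python
# Minimal sketch of a multi-scale contrastive objective over two subgraph
# views. Assumes each row i of the inputs is the embedding of node i under
# one of the two views. All names and hyperparameters are illustrative.
import torch
import torch.nn.functional as F


def bootstrapping_loss(online_pred: torch.Tensor, target_emb: torch.Tensor) -> torch.Tensor:
    """BYOL-style loss: cosine distance between the online branch's
    prediction and a stop-gradient embedding from the target branch."""
    online_pred = F.normalize(online_pred, dim=-1)
    target_emb = F.normalize(target_emb.detach(), dim=-1)  # stop gradient
    return (2 - 2 * (online_pred * target_emb).sum(dim=-1)).mean()


def node_agreement_loss(z1: torch.Tensor, z2: torch.Tensor,
                        temperature: float = 0.5) -> torch.Tensor:
    """InfoNCE-style loss: the same node in the two subgraph views forms
    the positive pair; all other nodes in the batch act as negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                  # [N, N] similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
    return F.cross_entropy(logits, labels)


def multi_scale_loss(online_pred, target_emb, z1, z2, alpha: float = 0.5):
    """Weighted sum of the two objectives; alpha is a hypothetical
    trade-off coefficient, not a value taken from the paper."""
    return alpha * bootstrapping_loss(online_pred, target_emb) \
        + (1 - alpha) * node_agreement_loss(z1, z2)
```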



Acknowledgements

This work is supported by the National Natural Science Foundation of China Project No. 62177015, the Natural Science Foundation of Guangdong Province of China under Grant No. 2022A1515010148 and the Key-Area Research and Development Program of Guangdong Province Project No. 2019B111101001.

Author information


Contributions

Lei Chen: Conceptualization, Methodology, Validation, Writing - original draft, Writing - review & editing, Visualization. Jin Huang: Writing - original draft, Supervision, Data curation, Validation. Jingjing Li: Formal analysis, Resources, Writing - review & editing. Yang Cao: Project administration, Funding acquisition. Jing Xiao: Writing - review & editing, Validation.

Corresponding author

Correspondence to Jin Huang.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Chen, L., Huang, J., Li, J. et al. Self-supervised graph representation learning using multi-scale subgraph views contrast. Neural Comput & Applic 34, 12559–12569 (2022). https://doi.org/10.1007/s00521-022-07299-x

