
Adaptive Randomized Graph Neural Network Based on Markov Diffusion Kernel

  • Conference paper in Artificial Neural Networks and Machine Learning – ICANN 2023 (ICANN 2023)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14257)


Abstract

Graph neural networks (GNNs), especially graph convolutional networks (GCNs), are popular in graph representation learning. However, GCNs only consider the nearest neighbors, which makes it difficult to expand a node's neighborhood, and their performance declines as the number of layers increases due to the over-smoothing problem. This paper therefore proposes an Adaptive Randomized Graph Neural Network based on the Markov Diffusion Kernel (ARM-net) to overcome these limitations. First, ARM-net designs a random propagation strategy based on the Bernoulli distribution. Second, an adaptive propagation process based on the Markov diffusion kernel separates feature transformation from propagation, expands the node neighborhood, and reduces the risk of over-smoothing. Finally, a graph regularization term is added so that nodes can exploit more information useful for their classification, thereby improving the generalization performance of the model. Experimental results show that ARM-net outperforms several recently proposed semi-supervised classification algorithms on semi-supervised node classification tasks and performs well on multiple datasets. The experiments also indicate that ARM-net alleviates, to some extent, the over-smoothing problem encountered during GNN propagation and generalizes better.
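
For readers who want a concrete picture of the two propagation ideas sketched in the abstract, the short NumPy example below mimics (a) a Bernoulli-based random propagation step that drops whole node-feature rows and rescales the rest, and (b) a Markov-diffusion-style propagation that averages the first t powers of the row-normalized transition matrix before applying it to the features. This is a minimal illustration under our own assumptions, not the authors' implementation: the function names random_propagation and markov_diffusion, the drop probability, and the number of diffusion steps are hypothetical choices made for the example.

# Minimal sketch (assumed, not the paper's code): Bernoulli node dropping and
# Markov-diffusion-style propagation on a toy graph.
import numpy as np

def random_propagation(X, drop_prob=0.5, rng=None):
    """Zero out whole node-feature rows with Bernoulli noise and rescale so
    the expectation of the output equals X (hypothetical helper)."""
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.binomial(1, 1.0 - drop_prob, size=(X.shape[0], 1))
    return X * keep / (1.0 - drop_prob)

def markov_diffusion(A, X, steps=4):
    """Propagate features with the average of the first `steps` powers of the
    row-normalized transition matrix P: (1/t) * sum_{tau=1..t} P^tau @ X."""
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.clip(deg, 1e-12, None)   # row-stochastic transition matrix
    out = np.zeros_like(X, dtype=float)
    prop = X.astype(float)
    for _ in range(steps):
        prop = P @ prop                 # P^tau @ X, computed iteratively
        out += prop
    return out / steps

# Toy usage: a 4-node path graph with 2-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.arange(8, dtype=float).reshape(4, 2)
H = markov_diffusion(A, random_propagation(X, drop_prob=0.3), steps=4)
print(H)

Decoupling the diffusion step from any learnable feature transformation, as in this sketch, is what lets the receptive field grow without stacking additional GCN layers.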



Author information

Correspondence to Qianli Ma.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Ma, Q., Fan, Z., Wang, C., Qian, Y. (2023). Adaptive Randomized Graph Neural Network Based on Markov Diffusion Kernel. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. ICANN 2023. Lecture Notes in Computer Science, vol 14257. Springer, Cham. https://doi.org/10.1007/978-3-031-44216-2_21


  • DOI: https://doi.org/10.1007/978-3-031-44216-2_21


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44215-5

  • Online ISBN: 978-3-031-44216-2

  • eBook Packages: Computer Science (R0)
