Abstract
Graph neural networks (GNNs), especially graph convolutional networks (GCNs), are popular in graph representation learning. However, GCNs aggregate only from immediate neighbors, which makes it difficult to enlarge a node's receptive field, and their performance declines as the number of layers increases due to the over-smoothing problem. This paper therefore proposes an Adaptive Randomized Graph Neural Network based on the Markov Diffusion Kernel (ARM-net) to overcome these limitations. First, ARM-net designs a random propagation strategy based on the Bernoulli distribution. Second, an adaptive propagation process based on the Markov diffusion kernel is designed to decouple feature transformation from propagation, enlarge the node's receptive field, and reduce the risk of over-smoothing. Finally, a graph regularization term is added so that nodes can exploit more of the information useful for their classification, thereby improving the generalization performance of the model. Experiments on semi-supervised node classification show that ARM-net outperforms several recently proposed semi-supervised classification algorithms on multiple datasets, mitigates to some extent the over-smoothing problem encountered during GNN propagation, and generalizes better.
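To make the propagation scheme described above concrete, here is a minimal NumPy sketch of the two ideas named in the abstract: Bernoulli node dropping (random propagation) followed by an averaged K-step diffusion in the spirit of the Markov diffusion kernel. The symmetric normalization, the rescaling by 1/(1-p), and the hyperparameters K and p are illustrative assumptions, not the authors' exact formulation (which additionally learns adaptive weights over the diffusion steps).

```python
import numpy as np

def markov_diffusion_propagate(A, X, K=4, p=0.5, rng=None):
    """Hedged sketch: Bernoulli node dropout followed by averaged
    Markov-diffusion propagation Z = (1/K) * sum_{k=1..K} T^k X,
    with T the symmetrically normalized adjacency with self-loops.
    K and p are illustrative, not the paper's tuned values."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]

    # Random propagation: drop whole node feature vectors with
    # probability p, then rescale so the expectation matches X.
    mask = rng.binomial(1, 1.0 - p, size=(n, 1))
    X_tilde = X * mask / (1.0 - p)

    # Symmetrically normalized transition matrix with self-loops.
    A_hat = A + np.eye(n)
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    T = D_inv_sqrt @ A_hat @ D_inv_sqrt

    # Average the K diffusion steps (Markov diffusion kernel style),
    # widening the receptive field without stacking GCN layers.
    Z = np.zeros_like(X_tilde, dtype=float)
    H = X_tilde.astype(float)
    for _ in range(K):
        H = T @ H
        Z += H
    return Z / K
```

In a full training loop, several such random propagations of the same graph would typically feed one classifier, with a consistency-style graph regularization term tying their predictions together, as the abstract describes.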