Abstract
In the broader machine learning literature, data-generation methods have shown promising results by synthesizing additional informative training examples to augment sparse labels. Such methods are less studied on graphs because of the intricate dependencies among nodes embedded in complex topologies. This paper presents a novel node-generation method that infuses a small set of high-quality synthesized nodes into the graph as additional labeled nodes, optimally expanding the propagation of label information. Because the framework simply infuses additional nodes, it is orthogonal to the graph-learning and downstream classification techniques, and is thus compatible with most popular graph pre-training (self-supervised learning), semi-supervised learning, and meta-learning methods. The contribution lies in designing the generated node set by solving a novel optimization problem. The optimization places the generated nodes so as to: (1) minimize the classification loss, guaranteeing training accuracy, and (2) maximize label propagation to low-confidence nodes in the downstream task, ensuring high-quality propagation. Theoretically, we show that this dual optimization maximizes the global confidence of node classification. Our experiments demonstrate statistically significant performance improvements over 14 baselines on 10 publicly available datasets.
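The placement idea in the abstract can be sketched in miniature: score a candidate set of generated seed nodes by how much label confidence they push onto low-confidence nodes after propagation. The snippet below is a minimal illustration only, assuming a generic random-walk-with-restart propagation operator; `propagate`, `placement_score`, the toy graph, and the scoring rule are illustrative stand-ins, not the paper's actual formulation or objective.

```python
import numpy as np

def propagate(adj, seeds, steps=10, alpha=0.85):
    # Row-normalized propagation with restart to the seed labels
    # (a generic stand-in for the paper's propagation operator).
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    P = adj / deg
    F = seeds.astype(float).copy()
    for _ in range(steps):
        F = alpha * (P @ F) + (1 - alpha) * seeds
    return F

def placement_score(adj, seeds, low_conf_idx):
    # Higher is better: total top-class confidence reached on the
    # previously low-confidence nodes after propagating the seeds.
    F = propagate(adj, seeds)
    return F[low_conf_idx].max(axis=1).sum()

# Toy path graph 0-1-2-3; node 0 carries the only label (class 0),
# so node 3 receives almost no label mass and is "low confidence".
adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0
seeds = np.zeros((4, 2))
seeds[0, 0] = 1.0
base = placement_score(adj, seeds, low_conf_idx=[3])

# Infuse one synthesized node (index 4), labeled class 0 and attached
# next to the low-confidence node, then rescore the placement.
adj2 = np.zeros((5, 5))
adj2[:4, :4] = adj
adj2[4, 3] = adj2[3, 4] = 1.0
seeds2 = np.zeros((5, 2))
seeds2[:4] = seeds
seeds2[4, 0] = 1.0
boost = placement_score(adj2, seeds2, low_conf_idx=[3])
assert boost > base  # the infused node raises downstream confidence
```

In the paper, the placement is found by jointly optimizing this propagation gain together with the classification loss on the generated nodes; the toy score above captures only the propagation half of that dual objective.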
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Cui, H., Abdelzaher, T. (2025). Node Generation for Node Classification in Sparsely-Labeled Graphs. In: Aiello, L.M., Chakraborty, T., Gaito, S. (eds) Social Networks Analysis and Mining. ASONAM 2024. Lecture Notes in Computer Science, vol 15211. Springer, Cham. https://doi.org/10.1007/978-3-031-78541-2_25
Print ISBN: 978-3-031-78540-5
Online ISBN: 978-3-031-78541-2