ABSTRACT
To model graph-structured data, graph learning, in particular deep graph learning with graph neural networks, has recently drawn much attention in both academic and industrial communities. The effectiveness of prevailing graph learning methods usually relies on abundant labeled data for model training. However, graphs are commonly scarcely labeled, since annotating and labeling graph data is time- and resource-intensive. It is therefore imperative to investigate graph learning with minimal human supervision for low-resource settings where limited or even no labeled data is available. In this tutorial, we focus on the state-of-the-art techniques of Graph Minimally-Supervised Learning, in particular a series of weakly-supervised learning, few-shot learning, and self-supervised learning methods on graph-structured data, as well as their real-world applications. The objectives of this tutorial are to: (1) formally categorize the problems in graph minimally-supervised learning and discuss the challenges under different learning scenarios; (2) comprehensively review existing and recent advances in graph minimally-supervised learning; and (3) elucidate open questions and future research directions. This tutorial introduces the major topics within minimally-supervised learning and offers a guide to a new frontier of graph learning.
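As a concrete illustration of one family of methods discussed here, self-supervised graph contrastive learning trains an encoder to produce similar embeddings for the same node under two random graph augmentations, without any labels. The following is a minimal numpy sketch, not any specific published method: a one-layer GCN-style encoder, edge-dropping augmentation, and an InfoNCE-style loss where each node's embedding in one view is pulled toward its counterpart in the other view. The graph, feature dimensions, and temperature are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph with 6 nodes (two triangles).
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0

X = rng.normal(size=(6, 8))   # random node features
W = rng.normal(size=(8, 4))   # shared (untrained) encoder weight

def gcn_embed(A, X, W):
    """One GCN propagation step: D^{-1/2} (A + I) D^{-1/2} X W."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))
    return (d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]) @ X @ W

def drop_edges(A, p, rng):
    """Augmentation: symmetrically remove a random fraction p of edges."""
    A2 = A.copy()
    iu = np.triu_indices_from(A2, k=1)
    drop = (A2[iu] > 0) & (rng.random(len(iu[0])) < p)
    A2[iu[0][drop], iu[1][drop]] = 0.0
    A2[iu[1][drop], iu[0][drop]] = 0.0
    return A2

def info_nce(Z1, Z2, tau=0.5):
    """Contrastive loss: the same node across views is the positive pair."""
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    sim = Z1 @ Z2.T / tau
    # row-wise log-softmax; the diagonal entries are the positives
    log_prob = sim - np.log(np.exp(sim).sum(1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Two stochastic views of the same graph, encoded with shared weights.
Z1 = gcn_embed(drop_edges(A, 0.2, rng), X, W)
Z2 = gcn_embed(drop_edges(A, 0.2, rng), X, W)
loss = info_nce(Z1, Z2)
print(float(loss))
```

In practice the encoder weights would be optimized (e.g., by gradient descent) to minimize this loss, and the resulting embeddings fine-tuned on the few labeled nodes available.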