Conclusion
Graph Foundation Models (GFMs) represent an emerging direction in graph machine learning. Drawing inspiration from the success of Large Language Models in NLP, GFMs are designed to be trained on extensive graph data and adapted to a diverse array of downstream tasks. In this article, we have introduced the concept of GFMs and explained it by comparing them with language foundation models, highlighting their similarities and differences. We identified pre-training and adaptation techniques, drawn from the fields of GNNs and LLMs, as the key technologies for building GFMs. Finally, we discussed the potential for significant applications of GFMs in domains ranging from social network analysis to bioinformatics and beyond.
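To make the pre-training and adaptation pipeline concrete, the following is a minimal, hypothetical sketch rather than the method of any particular GFM: a two-layer GCN encoder is pre-trained with a contrastive objective over two augmented views of an unlabeled graph, then adapted to a downstream node classification task by fitting a lightweight head on the frozen representations. The sketch assumes PyTorch Geometric and a data object exposing x, edge_index, y, and train_mask; all function names and hyperparameters are illustrative.

    # Hypothetical minimal sketch of "pre-train, then adapt" for a graph model.
    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv
    from torch_geometric.utils import dropout_edge

    class GCNEncoder(torch.nn.Module):
        def __init__(self, in_dim, hid_dim):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hid_dim)
            self.conv2 = GCNConv(hid_dim, hid_dim)

        def forward(self, x, edge_index):
            h = F.relu(self.conv1(x, edge_index))
            return self.conv2(h, edge_index)

    def augment(x, edge_index, p_feat=0.2, p_edge=0.2):
        # Two stochastic corruptions: feature masking and edge dropping.
        feat_mask = (torch.rand(x.size(1), device=x.device) > p_feat).float()
        edge_index, _ = dropout_edge(edge_index, p=p_edge)
        return x * feat_mask, edge_index

    def nt_xent(z1, z2, tau=0.5):
        # InfoNCE-style objective: the same node in the two views is the positive pair.
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        sim = torch.exp(z1 @ z2.t() / tau)
        return -torch.log(sim.diag() / sim.sum(dim=1)).mean()

    def pretrain(encoder, data, epochs=100, lr=1e-3):
        # Self-supervised pre-training on an unlabeled graph.
        opt = torch.optim.Adam(encoder.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            z1 = encoder(*augment(data.x, data.edge_index))
            z2 = encoder(*augment(data.x, data.edge_index))
            nt_xent(z1, z2).backward()
            opt.step()
        return encoder

    def adapt(encoder, data, num_classes, epochs=50, lr=1e-2):
        # Downstream adaptation: freeze the encoder, fit a small task head.
        head = torch.nn.Linear(encoder.conv2.out_channels, num_classes)
        opt = torch.optim.Adam(head.parameters(), lr=lr)
        with torch.no_grad():
            z = encoder(data.x, data.edge_index)
        for _ in range(epochs):
            opt.zero_grad()
            loss = F.cross_entropy(head(z[data.train_mask]), data.y[data.train_mask])
            loss.backward()
            opt.step()
        return head

In a full GFM pipeline, the frozen linear head would typically be replaced by a more expressive adaptation mechanism such as graph prompt tuning or adapter modules, and pre-training would span many graphs and domains rather than a single graph as in this sketch.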
Ethics declarations
Competing interests: The authors declare that they have no competing interests or financial conflicts to disclose.
Cite this article
Shi, C., Chen, J., Liu, J. et al. Graph foundation model. Front. Comput. Sci. 18, 186355 (2024). https://doi.org/10.1007/s11704-024-40046-0