Graph foundation model

  • Letter
  • Published in Frontiers of Computer Science

Conclusion

Graph Foundation Models (GFMs) represent an evolving direction in graph machine learning. Drawing inspiration from the success of Large Language Models in NLP, GFMs are designed to be trained on extensive graph data and adapted to a diverse array of downstream tasks. In this article, we have introduced the concept of GFMs and compared them with language foundation models to highlight their similarities and differences. We identified the key technologies for building GFMs as the pre-training and adaptation techniques drawn from the fields of GNNs and LLMs. Additionally, we discussed the potential for GFMs to find significant applications in various domains, ranging from social network analysis to bioinformatics and beyond.
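
To make the pre-training and adaptation paradigm concrete, below is a minimal illustrative sketch, assuming PyTorch; the names ToyGNNEncoder, pretrain, and adapt are hypothetical and do not correspond to any published GFM implementation. A toy one-layer graph encoder is first pre-trained by masked feature reconstruction, in the spirit of generative pre-training methods such as GraphMAE, and then adapted to node classification by fine-tuning with a small task head.

    import torch
    import torch.nn as nn

    class ToyGNNEncoder(nn.Module):
        # One propagation step: H = ReLU(A_norm @ X @ W)
        def __init__(self, in_dim, hid_dim):
            super().__init__()
            self.lin = nn.Linear(in_dim, hid_dim)

        def forward(self, x, adj_norm):
            return torch.relu(adj_norm @ self.lin(x))

    def pretrain(encoder, x, adj_norm, epochs=100, mask_rate=0.5):
        # Self-supervised pre-training: mask node features and reconstruct them.
        decoder = nn.Linear(encoder.lin.out_features, x.size(1))
        opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)
        for _ in range(epochs):
            mask = torch.rand(x.size(0)) < mask_rate      # choose nodes to corrupt
            x_corrupt = x.clone()
            x_corrupt[mask] = 0.0                         # zero out masked node features
            recon = decoder(encoder(x_corrupt, adj_norm))
            loss = ((recon[mask] - x[mask]) ** 2).mean()  # reconstruct masked nodes only
            opt.zero_grad(); loss.backward(); opt.step()

    def adapt(encoder, x, adj_norm, labels, train_idx, epochs=100):
        # Downstream adaptation: attach a task head and fine-tune for node classification.
        head = nn.Linear(encoder.lin.out_features, int(labels.max()) + 1)
        opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-2)
        for _ in range(epochs):
            logits = head(encoder(x, adj_norm))
            loss = nn.functional.cross_entropy(logits[train_idx], labels[train_idx])
            opt.zero_grad(); loss.backward(); opt.step()
        return head

    # Toy usage on a random graph with symmetric normalized adjacency.
    n, d = 40, 16
    x = torch.randn(n, d)
    adj = (torch.rand(n, n) < 0.1).float()
    adj = ((adj + adj.t()) > 0).float()
    adj.fill_diagonal_(1.0)                               # add self-loops
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    adj_norm = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)
    labels = torch.randint(0, 3, (n,))
    train_idx = torch.arange(n // 2)

    enc = ToyGNNEncoder(d, 32)
    pretrain(enc, x, adj_norm)                            # stage 1: self-supervised pre-training
    head = adapt(enc, x, adj_norm, labels, train_idx)     # stage 2: downstream adaptation

The same skeleton accommodates other pre-training objectives (for example, contrastive learning) and other adaptation strategies (for example, freezing the encoder and tuning only the head, or prompt tuning); this flexibility is what the separation of pre-training from adaptation is intended to provide.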

Author information

Corresponding author

Correspondence to Chuan Shi.

Ethics declarations

Competing interests: The authors declare that they have no competing interests or financial conflicts to disclose.

About this article

Cite this article

Shi, C., Chen, J., Liu, J. et al. Graph foundation model. Front. Comput. Sci. 18, 186355 (2024). https://doi.org/10.1007/s11704-024-40046-0
