DOI: 10.1145/3589335.3641246

Lecture-style Tutorial: Towards Graph Foundation Models

Published: 13 May 2024

Abstract

Emerging as fundamental building blocks for diverse artificial intelligence applications, foundation models have achieved notable success in natural language processing and many other domains. Concurrently, graph machine learning has gradually evolved from shallow methods to deep models in order to leverage the abundant graph-structured data that constitute an important pillar of the data ecosystem for artificial intelligence. Naturally, the emergence and homogenization capabilities of foundation models have piqued the interest of graph machine learning researchers, sparking discussions about a next-generation graph learning paradigm: one that is pre-trained on broad graph data and can be adapted to a wide range of downstream graph-based tasks. However, there is currently no clear definition or systematic analysis of this line of work.
In this tutorial, we will introduce the concept of graph foundation models (GFMs) and provide a comprehensive exposition of their key characteristics and underpinning technologies. We will then thoroughly review existing works that lay the groundwork for GFMs, grouped into three primary categories according to whether they are rooted in graph neural networks, large language models, or a hybrid of both. Beyond a comprehensive overview and in-depth analysis of the current landscape and progress toward GFMs, this tutorial will also explore potential avenues for future research in this important and dynamic field. Finally, to help the audience gain a systematic understanding of the topics covered, we present further details in our recent preprint, "Towards Graph Foundation Models: A Survey and Beyond" [4], available at https://arxiv.org/pdf/2310.11829.pdf.
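
To make the "pre-train on broad graph data, then adapt downstream" paradigm described above concrete, the sketch below shows a minimal two-stage loop in plain PyTorch. It is purely illustrative and assumes a toy setup: a random adjacency matrix, a hypothetical ToyGNN encoder, and link reconstruction as a stand-in self-supervised objective. The GFM approaches surveyed in the tutorial differ substantially in architecture, pretraining task, and adaptation strategy.

# Illustrative sketch only: a minimal "pretrain broadly, adapt downstream"
# loop for a graph model. All names (ToyGNN, normalize_adj, etc.) are
# hypothetical and do not come from the tutorial or its survey.
import torch
import torch.nn as nn

class ToyGNN(nn.Module):
    """One-layer GCN-style encoder: H = ReLU(A_hat X W)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, a_hat, x):
        return torch.relu(a_hat @ self.lin(x))

def normalize_adj(a):
    """Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2."""
    a = a + torch.eye(a.size(0))
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

# Synthetic "broad" graph: 100 nodes, 16 features, random symmetric edges.
n, f = 100, 16
x = torch.randn(n, f)
edges = (torch.rand(n, n) > 0.9).float()
a_hat = normalize_adj(torch.maximum(edges, edges.t()))

encoder = ToyGNN(f, 32)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-2)

# Stage 1: self-supervised pretraining. Link reconstruction is used here
# as a simple stand-in for the pretraining objectives the survey reviews.
target = (a_hat > 0).float()
for _ in range(50):
    z = encoder(a_hat, x)
    loss = nn.functional.binary_cross_entropy_with_logits(z @ z.t(), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: adaptation. Freeze the encoder and fit a lightweight task head
# (here node classification on dummy labels) over the pretrained embeddings.
y = torch.randint(0, 3, (n,))
head = nn.Linear(32, 3)
opt2 = torch.optim.Adam(head.parameters(), lr=1e-2)
for _ in range(50):
    with torch.no_grad():
        z = encoder(a_hat, x)
    loss = nn.functional.cross_entropy(head(z), y)
    opt2.zero_grad()
    loss.backward()
    opt2.step()

The point of the sketch is the two-stage structure: a self-supervised objective over broad graph data, followed by lightweight task-specific adaptation. That pattern holds regardless of the particular encoder or objective chosen here.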

Supplemental Material

MP4 File: Presentation video (tutorial)
MP4 File: Supplemental video

References

[1] Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the Opportunities and Risks of Foundation Models. arXiv preprint arXiv:2108.07258 (2021).
[2] Jiayan Guo, Lun Du, and Hengyu Liu. 2023. GPT4Graph: Can Large Language Models Understand Graph Structured Data? An Empirical Evaluation and Benchmarking. arXiv preprint arXiv:2305.15066 (2023).
[3] Tianyang Lin, Yuxin Wang, Xiangyang Liu, and Xipeng Qiu. 2022. A Survey of Transformers. AI Open (2022).
[4] Jiawei Liu, Cheng Yang, Zhiyuan Lu, Junze Chen, Yibo Li, Mengmei Zhang, Ting Bai, Yuan Fang, Lichao Sun, Philip S. Yu, et al. 2023. Towards Graph Foundation Models: A Survey and Beyond. arXiv preprint arXiv:2310.11829 (2023).
[5] Yixin Liu, Ming Jin, Shirui Pan, Chuan Zhou, Yu Zheng, Feng Xia, and Philip S. Yu. 2022. Graph Self-Supervised Learning: A Survey. IEEE Transactions on Knowledge and Data Engineering, Vol. 35, 6 (2022), 5879-5900.
[6] Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained Models for Natural Language Processing: A Survey. Science China Technological Sciences, Vol. 63, 10 (2020), 1872-1897.
[7] Chuan Shi, Yitong Li, Jiawei Zhang, Yizhou Sun, and Philip S. Yu. 2016. A Survey of Heterogeneous Information Network Analysis. IEEE Transactions on Knowledge and Data Engineering, Vol. 29, 1 (2016), 17-37.
[8] Heng Wang, Shangbin Feng, Tianxing He, Zhaoxuan Tan, Xiaochuang Han, and Yulia Tsvetkov. 2023. Can Language Models Solve Graph Problems in Natural Language? arXiv preprint arXiv:2305.10037 (2023).
[9] Yang Yuan. 2023. On the Power of Foundation Models. In International Conference on Machine Learning. PMLR, 40519-40530.
[10] Ziwei Zhang, Haoyang Li, Zeyang Zhang, Yijian Qin, Xin Wang, and Wenwu Zhu. 2023. Large Graph Models: A Perspective. arXiv preprint arXiv:2308.14522 (2023).


Published In

WWW '24: Companion Proceedings of the ACM Web Conference 2024
May 2024
1928 pages
ISBN:9798400701726
DOI:10.1145/3589335

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. deep learning
  2. graph foundation models
  3. graph neural networks
  4. large language models

Qualifiers

  • Tutorial

Conference

WWW '24: The ACM Web Conference 2024
May 13-17, 2024
Singapore, Singapore

Acceptance Rates

Overall Acceptance Rate 1,899 of 8,196 submissions, 23%
