DOI: 10.1145/3534678.3539336

Streaming Graph Neural Networks with Generative Replay

Published: 14 August 2022

Abstract

Training Graph Neural Networks (GNNs) incrementally is a particularly urgent problem, because real-world graph data usually arrives in a streaming fashion, and inefficient model updates lead to out-of-date embeddings that degrade performance in downstream tasks. Traditional incremental learning methods gradually forget old knowledge when learning new patterns, which is known as the catastrophic forgetting problem. Although saving and revisiting historical graph data alleviates the problem, storage limitations in real-world applications restrict how much data can be saved, so the GNN still forgets the rest. In this paper, we propose a streaming GNN based on generative replay, which can incrementally learn new patterns while maintaining existing knowledge without accessing historical data. Specifically, our model consists of the main model (a GNN) and an auxiliary generative model. The generative model, based on random walks with restart, learns to generate fake historical samples (i.e., nodes and their neighborhoods), which are trained together with real data to avoid the forgetting problem. In addition, we design an incremental update algorithm that lets the generative model track the graph distribution and the GNN capture the current patterns. Our model is evaluated on different streaming data sets. The node classification results show that our model updates efficiently and achieves performance comparable to model retraining. Code is available at https://github.com/Junshan-Wang/SGNN-GR.
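The replay scheme the abstract describes can be sketched minimally: a generator based on random walks with restart produces fake "historical" neighborhood samples, and each incremental update trains on a mix of real walks from the new graph snapshot and generated replay walks. This is an illustrative sketch only, not the authors' released implementation; the function names (`random_walk_with_restart`, `replay_training_batches`), the plain-Python adjacency representation, and the use of a simple sampler in place of the paper's learned generative model are all assumptions made for demonstration.

```python
import random


def random_walk_with_restart(adj, start, walk_len, restart_p, rng):
    """Sample a walk of length walk_len from `start`; at each step,
    jump back to `start` with probability restart_p, otherwise move
    to a random neighbor of the current node."""
    walk = [start]
    current = start
    for _ in range(walk_len - 1):
        if rng.random() < restart_p or not adj[current]:
            current = start  # restart (or dead end)
        else:
            current = rng.choice(sorted(adj[current]))
        walk.append(current)
    return walk


def replay_training_batches(new_walks, generate_fake_walk, n_replay, rng):
    """Build one training batch: real walks from the new snapshot mixed
    with `n_replay` generated 'historical' walks, so that training on
    the batch rehearses old patterns alongside the new ones."""
    fakes = [generate_fake_walk() for _ in range(n_replay)]
    batch = new_walks + fakes
    rng.shuffle(batch)
    return batch


if __name__ == "__main__":
    rng = random.Random(0)
    adj = {0: [1, 2], 1: [0], 2: [0, 1]}  # toy undirected graph
    real = [random_walk_with_restart(adj, 0, 8, 0.3, rng) for _ in range(2)]
    batch = replay_training_batches(
        real, lambda: random_walk_with_restart(adj, 2, 8, 0.3, rng), 3, rng
    )
    print(len(batch))  # 2 real + 3 generated walks
```

In the paper's full method the fake walks come from a learned generative model trained to match the historical graph distribution; the point of the sketch is only the mixing step, which is what prevents the GNN's update on new data from overwriting old knowledge.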

Supplemental Material

MP4 File
Presentation video for "Streaming Graph Neural Networks via Generative Replay"





Published In

KDD '22: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
August 2022
5033 pages
ISBN:9781450393850
DOI:10.1145/3534678
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. continual learning
  2. graph neural networks
  3. streaming networks

Qualifiers

  • Research-article

Conference

KDD '22

Acceptance Rates

Overall Acceptance Rate 1,133 of 8,635 submissions, 13%

Article Metrics

  • Downloads (last 12 months): 285
  • Downloads (last 6 weeks): 21
Reflects downloads up to 03 Mar 2025

Cited By

  • (2025) Continual learning with high-order experience replay for dynamic network embedding. Pattern Recognition, 159, 111093. DOI: 10.1016/j.patcog.2024.111093. Online publication date: Mar-2025.
  • (2025) Continuous-time dynamic graph learning based on spatio-temporal random walks. The Journal of Supercomputing, 81:2. DOI: 10.1007/s11227-024-06881-5. Online publication date: 11-Jan-2025.
  • (2024) Continual Learning for Graph Recommender System Through Extracting Collaborative Signal. The Journal of Korean Institute of Information Technology, 22:10, 129-139. DOI: 10.14801/jkiit.2024.22.10.129. Online publication date: 31-Oct-2024.
  • (2024) TCGC: Temporal Collaboration-Aware Graph Co-Evolution Learning for Dynamic Recommendation. ACM Transactions on Information Systems, 43:1, 1-27. DOI: 10.1145/3687470. Online publication date: 26-Nov-2024.
  • (2024) FTF-ER: Feature-Topology Fusion-Based Experience Replay Method for Continual Graph Learning. Proceedings of the 32nd ACM International Conference on Multimedia, 8336-8344. DOI: 10.1145/3664647.3681457. Online publication date: 28-Oct-2024.
  • (2024) Topology-aware Embedding Memory for Continual Learning on Expanding Networks. Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 4326-4337. DOI: 10.1145/3637528.3671732. Online publication date: 25-Aug-2024.
  • (2024) GPT4Rec: Graph Prompt Tuning for Streaming Recommendation. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1774-1784. DOI: 10.1145/3626772.3657720. Online publication date: 10-Jul-2024.
  • (2024) DSLR: Diversity Enhancement and Structure Learning for Rehearsal-based Graph Continual Learning. Proceedings of the ACM Web Conference 2024, 733-744. DOI: 10.1145/3589334.3645561. Online publication date: 13-May-2024.
  • (2024) Continual Learning for Smart City: A Survey. IEEE Transactions on Knowledge and Data Engineering, 36:12, 7805-7824. DOI: 10.1109/TKDE.2024.3447123. Online publication date: Dec-2024.
  • (2024) Neighborhood Sampling with Incremental Learning for Dynamic Network Embedding. 2024 6th International Conference on Communications, Information System and Computer Engineering (CISCE), 1412-1416. DOI: 10.1109/CISCE62493.2024.10653376. Online publication date: 10-May-2024.
