
Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realisation

Published: 30 May 2022 (DOI: 10.1145/3488932.3497753)

Abstract

Machine learning models face a severe threat from model extraction attacks, in which a well-trained private model owned by a service provider can be stolen by an attacker posing as a client. Unfortunately, prior work focuses on models trained over Euclidean data, e.g., images and texts, while how to extract a graph neural network (GNN) model that encodes a graph structure and node features remains unexplored. In this paper, we comprehensively investigate and develop model extraction attacks against GNN models for the first time. We first systematically formalise the threat model for GNN model extraction and classify the adversarial threats into seven categories according to the attacker's background knowledge, e.g., the attributes and/or neighbour connections of the nodes available to the attacker. We then present detailed attack methods that exploit the knowledge accessible under each threat. Evaluations on three real-world datasets show that our attacks effectively extract duplicated models, i.e., 84%-89% of the inputs in the target domain receive the same output predictions as from the victim model.
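
To make the extraction pipeline concrete, below is a minimal sketch (in plain PyTorch, not the authors' code) of the most permissive setting, where the attacker holds both node features and graph structure: query the victim's prediction API for labels on the attacker's nodes, train a surrogate GCN on those pseudo-labels, and measure fidelity as the fraction of nodes on which the two models agree. The two-layer GCN, the synthetic random graph, and names such as extract_surrogate are illustrative assumptions; the paper's seven threat categories differ precisely in which of these inputs the attacker actually observes and must otherwise approximate.

import torch
import torch.nn.functional as F

def normalize_adj(adj):
    # Symmetrically normalise A + I (the standard GCN propagation matrix).
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

class GCN(torch.nn.Module):
    # Two-layer GCN used here for both the victim and the surrogate.
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, a_norm):
        h = F.relu(a_norm @ self.lin1(x))
        return a_norm @ self.lin2(h)

def extract_surrogate(victim, x, a_norm, epochs=200, lr=0.01):
    # Query the victim once per node (stands in for the MLaaS prediction API)
    # and fit a surrogate to the returned hard labels.
    with torch.no_grad():
        victim_logits = victim(x, a_norm)
        pseudo_labels = victim_logits.argmax(dim=1)
    surrogate = GCN(x.size(1), 16, victim_logits.size(1))
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(surrogate(x, a_norm), pseudo_labels).backward()
        opt.step()
    return surrogate

if __name__ == "__main__":
    torch.manual_seed(0)
    n, d, c = 200, 32, 4                        # synthetic attacker-side graph
    x = torch.randn(n, d)                       # node features
    adj = (torch.rand(n, n) < 0.05).float()
    adj = ((adj + adj.t()) > 0).float()         # symmetric random adjacency
    a_norm = normalize_adj(adj)

    victim = GCN(d, 16, c)                      # stands in for the deployed model
                                                # (untrained here; a real victim is
                                                # trained on the provider's private data)
    surrogate = extract_surrogate(victim, x, a_norm)

    with torch.no_grad():
        agree = surrogate(x, a_norm).argmax(1) == victim(x, a_norm).argmax(1)
    print(f"fidelity: {agree.float().mean().item():.2%}")  # fraction of matching predictions

The fidelity printed here mirrors the evaluation criterion described in the abstract, i.e., agreement with the victim's output predictions over the target domain, rather than accuracy against ground-truth labels.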



Published In

ASIA CCS '22: Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security
May 2022, 1291 pages
ISBN: 9781450391405
DOI: 10.1145/3488932

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

1. graph neural networks
2. model extraction attack

Qualifiers

• Research article

Funding Sources

• Australian Research Council

Conference

ASIA CCS '22

Acceptance Rates

Overall acceptance rate: 418 of 2,322 submissions, 18%


