DOI: 10.1145/3576915.3623173
Research Article

Devil in Disguise: Breaching Graph Neural Networks Privacy through Infiltration

Published: 21 November 2023

Abstract

Graph neural networks (GNNs) have been developed to mine useful information from graph data in various applications, e.g., healthcare, fraud detection, and social recommendation. However, GNNs also open up new attack surfaces for privacy attacks on graph data. In this paper, we propose Infiltrator, a privacy attack that extracts node-level private information given only black-box access to a GNN. Unlike existing works, which require prior knowledge about the victim node, we explore the possibility of conducting the attack without any such knowledge. Our idea is to infiltrate the graph with attacker-created nodes that befriend the victim node. More specifically, we design infiltration schemes that enable the adversary to infer a victim node's label, neighboring links, and sensitive attributes. We evaluate Infiltrator with extensive experiments on three representative GNN models and six real-world datasets. The results demonstrate that Infiltrator achieves an attack performance of more than 98% in all three attacks, outperforming baseline approaches. We further evaluate Infiltrator's resistance to defenses, including a graph-homophily-based defender and a differentially private model.
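To make the infiltration idea concrete, here is a minimal sketch of a label-inference variant of such an attack. This is our illustration, not the paper's released code: it assumes a message-passing model exposed through the common model(x, edge_index) interface (as in PyTorch Geometric), and the names infiltrate_and_query and victim_id are hypothetical placeholders; the paper's actual schemes also cover link and attribute inference.

```python
import torch

@torch.no_grad()
def infiltrate_and_query(model, x, edge_index, victim_id):
    """Inject one attacker-created node, link it to the victim, and read the
    black-box model's posterior for the injected node (hypothetical sketch)."""
    new_id = x.size(0)

    # Attacker node with neutral (all-zero) features, so the aggregated
    # message from the victim dominates the injected node's representation.
    x = torch.cat([x, torch.zeros(1, x.size(1))], dim=0)

    # "Befriend" the victim: add edges in both directions.
    new_edges = torch.tensor([[new_id, victim_id],
                              [victim_id, new_id]], dtype=torch.long)
    edge_index = torch.cat([edge_index, new_edges], dim=1)

    posteriors = model(x, edge_index)        # single black-box query
    return int(posteriors[new_id].argmax())  # inferred victim label
```

The intuition is that GNN layers aggregate neighbor features, so an injected node with uninformative features inherits mostly the victim's signal; its predicted class therefore tends to reveal the victim's label.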

Published In

CCS '23: Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security
November 2023, 3722 pages
ISBN: 9798400700507
DOI: 10.1145/3576915

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

1. graph neural network
2. inference attack
3. machine learning privacy

Conference

CCS '23
Overall Acceptance Rate: 1,261 of 6,999 submissions, 18%
