DOI: 10.1145/3411501.3419424
research-article

Adversarial Detection on Graph Structured Data

Published: 09 November 2020

Abstract

Graph Neural Networks (GNNs) have achieved tremendous progress on perceptual tasks in recent years, such as node classification, graph classification, and link prediction. However, recent studies show that GNN models are highly vulnerable to adversarial attacks, so enhancing the robustness of such models remains a significant challenge. In this paper, we propose a subgraph-based adversarial example detection method against adversarial perturbations. To the best of our knowledge, this is the first work on adversarial detection for deep-learning graph classification models that uses Subgraph Networks (SGNs) to restructure the graph's features. Moreover, we develop a joint adversarial detector to cope with more complicated and unknown attacks. Specifically, we first explain how adversarial attacks can easily fool the models, and then show that the SGN facilitates the distinction of adversarial examples generated by state-of-the-art attacks. We experiment on five real-world graph datasets using three kinds of attack strategies on graph classification. Our empirical results show the effectiveness of our detection method and further explain the SGN's capacity to tell apart malicious graphs.
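To make the core idea concrete, the following is a minimal sketch (not the authors' code) of how a first-order Subgraph Network restructures a graph, assuming the first-order SGN is the line-graph construction defined in the SGN work of Xuan et al. (2019): each edge of the original graph becomes an SGN node, and two SGN nodes are linked whenever their corresponding edges share an endpoint.

```python
# Hedged sketch: first-order Subgraph Network (SGN) construction,
# assuming the line-graph definition from Xuan et al. (2019).
from itertools import combinations

def first_order_sgn(edges):
    """Map a graph, given as a list of undirected edges, to its SGN.

    Returns (sgn_nodes, sgn_edges): SGN nodes are the original edges;
    SGN edges connect pairs of original edges sharing an endpoint.
    """
    sgn_nodes = [tuple(sorted(e)) for e in edges]
    sgn_edges = [
        (a, b)
        for a, b in combinations(sgn_nodes, 2)
        if set(a) & set(b)  # edges that share an endpoint become adjacent
    ]
    return sgn_nodes, sgn_edges

# A detector can then extract graph-level features (node count, edge count,
# density, etc.) from both G and SGN(G): edge perturbations that look subtle
# in G tend to be amplified in SGN(G).
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
nodes, links = first_order_sgn(cycle)
# The SGN of a 5-cycle is again a 5-cycle: 5 nodes and 5 links.
```

The feature names and detector pairing above are illustrative assumptions; the paper's joint detector combines such SGN-derived features with the original graph's features rather than using either alone.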

Supplementary Material

MP4 File (3411501.3419424.mp4)




Published In

PPMLP'20: Proceedings of the 2020 Workshop on Privacy-Preserving Machine Learning in Practice
November 2020
75 pages
ISBN:9781450380881
DOI:10.1145/3411501
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. data mining
  2. graph classification
  3. joint adversarial detection
  4. subgraph network


Funding Sources

  • National Natural Science Foundation of China
  • Natural Science Foundation of Zhejiang Province
  • Major Special Funding for Science and Technology Innovation 2025 in Ningbo

Conference

CCS '20

Article Metrics

  • Downloads (last 12 months): 49
  • Downloads (last 6 weeks): 4
Reflects downloads up to 13 Feb 2025


Cited By

  • Verifying message-passing neural networks via topology-based bounds tightening. Proceedings of the 41st International Conference on Machine Learning (2024), 18489--18514. DOI: 10.5555/3692070.3692814
  • Detecting Targets of Graph Adversarial Attacks With Edge and Feature Perturbations. IEEE Transactions on Computational Social Systems 11, 3 (2024), 3218--3231. DOI: 10.1109/TCSS.2023.3344642
  • Multi-Order-Content-Based Adaptive Graph Attention Network for Graph Node Classification. Symmetry 15, 5 (2023), 1036. DOI: 10.3390/sym15051036
  • Adversarial attacks on graph classification via Bayesian optimisation. Proceedings of the 35th International Conference on Neural Information Processing Systems (2021), 6983--6996. DOI: 10.5555/3540261.3540796
  • Adversarial Defenses on Graphs: Towards Increasing the Robustness of Algorithms. Graph Data Mining (2021), 121--154. DOI: 10.1007/978-981-16-2609-8_6
