Research Article

Adversarial Attacks and Defenses on Graphs

Published: 17 January 2021

Abstract

Deep neural networks (DNNs) have achieved remarkable performance on a wide range of tasks. However, recent studies have shown that DNNs can be easily fooled by small perturbations to their inputs, known as adversarial attacks.
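The canonical illustration of such an attack is the fast gradient sign method (FGSM) of Goodfellow et al. [25]: take a single step of size ε in the direction of the sign of the loss gradient with respect to the input. The sketch below applies FGSM to a toy logistic-regression "network" in plain NumPy; the weights and data point are illustrative values chosen for this example, not anything from the paper.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast gradient sign method (FGSM) for a logistic model [25].

    Returns x_adv = x + eps * sign(grad_x loss): the L_inf-bounded
    step of size eps that most increases the loss at x.
    """
    # Forward pass: p = sigmoid(w.x + b)
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    # Gradient of the binary cross-entropy loss w.r.t. the input x
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy example: a correctly classified point flips class under a
# small (eps = 0.25) perturbation.
w = np.array([2.0, -1.0]); b = 0.0
x = np.array([0.3, 0.2]); y = 1.0            # w.x + b = 0.4 > 0 -> class 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.25)
print((w @ x + b) > 0, (w @ x_adv + b) > 0)  # True False: prediction flips
```

The same idea, adapted to the discrete structure of graphs (flipping edges or node features instead of nudging pixels), underlies many of the attack methods surveyed here.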

References

[1]
L. A. Adamic and N. Glance. The political blogosphere and the 2004 U.S. election: Divided they blog. In Proceedings of the 3rd International Workshop on Link Discovery, pages 36--43, 2005.
[2]
C. C. Aggarwal, H. Wang, et al. Managing and mining graph data, volume 40. Springer, 2010.
[3]
P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez- Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
[4]
A. Bojchevski and S. Günnemann. Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking. arXiv preprint arXiv:1707.03815, 2017.
[5]
A. Bojchevski and S. Günnemann. Adversarial attacks on node embeddings via graph poisoning. arXiv preprint arXiv:1809.01093, 2018.
[6]
A. Bojchevski and S. Günnemann. Certifiable robustness to graph perturbations. In Advances in Neural Information Processing Systems, pages 8317--8328, 2019.
[7]
A. Bojchevski, J. Klicpera, and S. Günnemann. Efficient robustness certificates for discrete data: Sparsity-aware randomized smoothing for graphs, images and more. arXiv preprint arXiv:2008.12952, 2020.
[8]
A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in neural information processing systems, pages 2787--2795, 2013.
[9]
H. Chang, Y. Rong, T. Xu, W. Huang, H. Zhang, P. Cui, W. Zhu, and J. Huang. The general blackbox attack method for graph neural networks. arXiv preprint arXiv:1908.01297, 2019.
[10]
J. Chen, L. Chen, Y. Chen, M. Zhao, S. Yu, Q. Xuan, and X. Yang. Ga-based q-attack on community detection. IEEE Transactions on Computational Social Systems, 6(3):491--503, 2019.
[11]
J. Chen, Z. Shi, Y. Wu, X. Xu, and H. Zheng. Link prediction adversarial attack. arXiv preprint arXiv:1810.01110, 2018.
[12]
J. Chen, Y. Wu, X. Lin, and Q. Xuan. Can adversarial network attack be defended? arXiv preprint arXiv:1903.05994, 2019.
[13]
J. Chen, Y. Wu, X. Xu, Y. Chen, H. Zheng, and Q. Xuan. Fast gradient attack on network embedding. arXiv preprint arXiv:1809.02797, 2018.
[14]
X. Chen, C. Liu, B. Li, K. Lu, and D. Song. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv preprint arXiv:1712.05526, 2017.
[15]
Y. Chen, Y. Nadji, A. Kountouras, F. Monrose, R. Perdisci, M. Antonakakis, and N. Vasiloglou. Practical attacks against graph-based clustering. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 1125--1142, 2017.
[16]
J. M. Cohen, E. Rosenfeld, and J. Z. Kolter. Certified adversarial robustness via randomized smoothing. arXiv preprint arXiv:1902.02918, 2019.
[17]
H. Dai, H. Li, T. Tian, X. Huang, L. Wang, J. Zhu, and L. Song. Adversarial attack on graph structured data. arXiv preprint arXiv:1806.02371, 2018.
[18]
Q. Dai, X. Shen, L. Zhang, Q. Li, and D. Wang. Adversarial training methods for network embedding. In The World Wide Web Conference, pages 329--339, 2019.
[19]
Z. Deng, Y. Dong, and J. Zhu. Batch virtual adversarial training for graph convolutional networks. arXiv preprint arXiv:1902.09192, 2019.
[20]
Y. Dou, G. Ma, P. S. Yu, and S. Xie. Robust spammer detection by nash reinforcement learning. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 924--933, 2020.
[21]
N. Entezari, S. A. Al-Sayouri, A. Darvishzadeh, and E. E. Papalexakis. All you need is low (rank): Defending against adversarial attacks on graphs. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 169--177, 2020.
[22]
F. Feng, X. He, J. Tang, and T.-S. Chua. Graph adversarial training: Dynamically regularizing based on graph structure. IEEE Transactions on Knowledge and Data Engineering, 2019.
[23]
L. Franceschi, M. Niepert, M. Pontil, and X. He. Learning discrete structures for graph neural networks. arXiv preprint arXiv:1903.11960, 2019.
[24]
J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212, 2017.
[25]
I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
[26]
W. L. Hamilton, R. Ying, and J. Leskovec. Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584, 2017.
[27]
X. Huang, J. Li, and X. Hu. Label informed attributed network embedding. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pages 731--739, 2017.
[28]
V. N. Ioannidis, D. Berberidis, and G. B. Giannakis. Graphsac: Detecting anomalies in large-scale graphs. arXiv preprint arXiv:1910.09589, 2019.
[29]
G. Jeh and J. Widom. Scaling personalized web search. In Proceedings of the 12th international conference on World Wide Web, pages 271--279, 2003.
[30]
J. Jia, B. Wang, X. Cao, and N. Z. Gong. Certified robustness of community detection against adversarial structural perturbation via randomized smoothing. arXiv preprint arXiv:2002.03421, 2020.
[31]
B. Jiang, Z. Zhang, D. Lin, J. Tang, and B. Luo. Semi-supervised learning with graph learning-convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11313--11320, 2019.
[32]
H. Jin and X. Zhang. Latent adversarial training of graph convolution networks. In ICML Workshop on Learning and Reasoning with Graph-Structured Representations, 2019.
[33]
W. Jin, Y. Ma, X. Liu, X. Tang, S. Wang, and J. Tang. Graph structure learning for robust graph neural networks. arXiv preprint arXiv:2005.10203, 2020.
[34]
T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
[35]
T. N. Kipf and M. Welling. Variational graph autoencoders. arXiv preprint arXiv:1611.07308, 2016.
[36]
J. Klicpera, A. Bojchevski, and S. Günnemann. Predict then propagate: Graph neural networks meet personalized pagerank. arXiv preprint arXiv:1810.05997, 2018.
[37]
L. Landrieu and M. Simonovsky. Large-scale point cloud semantic segmentation with superpoint graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4558--4567, 2018.
[38]
J. Li, H. Zhang, Z. Han, Y. Rong, H. Cheng, and J. Huang. Adversarial attack on community detection by hiding individuals, 2020.
[39]
Y. Li, S. Bai, C. Xie, Z. Liao, X. Shen, and A. Yuille. Regional homogeneity: Towards learning transferable universal adversarial perturbations against defenses. arXiv preprint arXiv:1904.00979, 2019.
[40]
Y. Li, S. Bai, Y. Zhou, C. Xie, Z. Zhang, and A. L. Yuille. Learning transferable adversarial examples via ghost networks. In AAAI, pages 11458--11465, 2020.
[41]
Y. Li, W. Jin, H. Xu, and J. Tang. Deeprobust: A pytorch library for adversarial attacks and defenses. arXiv preprint arXiv:2005.06149, 2020.
[42]
Y. Lin, Z. Liu, M. Sun, Y. Liu, and X. Zhu. Learning entity and relation embeddings for knowledge graph completion. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
[43]
X. Liu, S. Si, X. Zhu, Y. Li, and C.-J. Hsieh. A unified framework for data poisoning attack to graph-based semi-supervised learning. arXiv preprint arXiv:1910.14147, 2019.
[44]
J. Ma, S. Ding, and Q. Mei. Black-box adversarial attacks on graph neural networks with limited node access. arXiv preprint arXiv:2006.05057, 2020.
[45]
T. Ma, C. Xiao, J. Zhou, and F. Wang. Drug similarity integration through attentive multi-view graph autoencoders. arXiv preprint arXiv:1804.10850, 2018.
[46]
Y. Ma, S. Wang, L. Wu, and J. Tang. Attacking graph convolutional networks via rewiring. arXiv preprint arXiv:1906.03750, 2019.
[47]
A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
[48]
D. Marcheggiani and I. Titov. Encoding sentences with graph convolutional networks for semantic role labeling. arXiv preprint arXiv:1703.04826, 2017.
[49]
M. McPherson, L. Smith-Lovin, and J. M. Cook. Birds of a feather: Homophily in social networks. Annual review of sociology, 27(1):415--444, 2001.
[50]
V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
[51]
B. Perozzi, R. Al-Rfou, and S. Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701--710, 2014.
[52]
T. Pham, T. Tran, D. Phung, and S. Venkatesh. Column networks for collective classification. arXiv preprint arXiv:1609.04508, 2016.
[53]
J. Qiu, Y. Dong, H. Ma, J. Li, K. Wang, and J. Tang. Network embedding as matrix factorization: Unifying deepwalk, line, pte, and node2vec. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 459--467, 2018.
[54]
A. Said, E. W. De Luca, and S. Albayrak. How social relationships affect user similarities.
[55]
P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93--93, 2008.
[56]
L. Sun, J. Wang, P. S. Yu, and B. Li. Adversarial attack and defense on graph data: A survey. arXiv preprint arXiv:1812.10528, 2018.
[57]
Y. Sun, S. Wang, X. Tang, T.-Y. Hsieh, and V. Honavar. Node injection attacks on graphs via reinforcement learning. arXiv preprint arXiv:1909.06543, 2019.
[58]
M. Sundararajan, A. Taly, and Q. Yan. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 3319--3328. JMLR.org, 2017.
[59]
J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, and Q. Mei. Line: Large-scale information network embedding. In Proceedings of the 24th international conference on world wide web, pages 1067--1077, 2015.
[60]
X. Tang, Y. Li, Y. Sun, H. Yao, P. Mitra, and S. Wang. Transferring robustness for graph neural network against poisoning attacks. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 600--608, 2020.
[61]
S. Tao, H. Shen, Q. Cao, L. Hou, and X. Cheng. Adversarial immunization for improving certifiable robustness on graphs. arXiv preprint arXiv:2007.09647, 2020.
[62]
P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
[63]
P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph attention networks. In International Conference on Learning Representations, 2018.
[64]
B. Wang and N. Z. Gong. Attacking graph-based classification via manipulating the graph structure. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pages 2023--2040, 2019.
[65]
B. Wang, J. Jia, X. Cao, and N. Z. Gong. Certified robustness of graph neural networks against adversarial structural perturbation. arXiv preprint arXiv:2008.10715, 2020.
[66]
B. Wang, Y. Yao, S. Shan, H. Li, B. Viswanath, H. Zheng, and B. Y. Zhao. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks. In 2019 IEEE Symposium on Security and Privacy (SP), pages 707--723. IEEE, 2019.
[67]
J. Wang, M. Luo, F. Suya, J. Li, Z. Yang, and Q. Zheng. Scalable attack on graph data by injecting vicious nodes. arXiv preprint arXiv:2004.13825, 2020.
[68]
X. Wang, H. Ji, C. Shi, B. Wang, Y. Ye, P. Cui, and P. S. Yu. Heterogeneous graph attention network. In The World Wide Web Conference, pages 2022--2032, 2019.
[69]
X. Wang, X. Liu, and C.-J. Hsieh. Graphdefense: Towards robust graph convolutional networks, 2019.
[70]
[71]
M. Waniek, T. P. Michalak, M. J. Wooldridge, and T. Rahwan. Hiding individuals and communities in a social network. Nature Human Behaviour, 2(2):139--147, 2018.
[72]
H. Wu, C. Wang, Y. Tyshetskiy, A. Docherty, K. Lu, and L. Zhu. Adversarial examples for graph data: deep insights into attack and defense. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 4816--4823. AAAI Press, 2019.
[73]
Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu. A comprehensive survey on graph neural networks. arXiv preprint arXiv:1901.00596, 2019.
[74]
Z. Xi, R. Pang, S. Ji, and T. Wang. Graph backdoor. arXiv preprint arXiv:2006.11890, 2020.
[75]
H. Xu, Y. Ma, H. Liu, D. Deb, H. Liu, J. Tang, and A. K. Jain. Adversarial attacks and defenses in images, graphs and text: A review. arXiv preprint arXiv:1909.08072, 2019.
[76]
K. Xu, H. Chen, S. Liu, P.-Y. Chen, T.-W. Weng, M. Hong, and X. Lin. Topology attack and defense for graph neural networks: An optimization perspective. arXiv preprint arXiv:1906.04214, 2019.
[77]
K. Xu, C. Li, Y. Tian, T. Sonobe, K.-i. Kawarabayashi, and S. Jegelka. Representation learning on graphs with jumping knowledge networks. arXiv preprint arXiv:1806.03536, 2018.
[78]
X. Xu, Y. Yu, B. Li, L. Song, C. Liu, and C. Gunter. Characterizing malicious edges targeting on graph neural networks. 2018.
[79]
X. Zang, Y. Xie, J. Chen, and B. Yuan. Graph universal adversarial attacks: A few bad actors ruin graph learning models, 2020.
[80]
A. Zhang and J. Ma. Defensevgae: Defending against adversarial attacks on graph data via a variational graph autoencoder. arXiv preprint arXiv:2006.08900, 2020.
[81]
H. Zhang, T. Zheng, J. Gao, C. Miao, L. Su, Y. Li, and K. Ren. Towards data poisoning attack against knowledge graph embedding. arXiv preprint arXiv:1904.12052, 2019.
[82]
X. Zhang and M. Zitnik. Gnnguard: Defending graph neural networks against adversarial attacks. arXiv preprint arXiv:2006.08149, 2020.
[83]
Z. Zhang, J. Jia, B. Wang, and N. Z. Gong. Backdoor attacks to graph neural networks. arXiv preprint arXiv:2006.11165, 2020.
[84]
J. Zhou, G. Cui, Z. Zhang, C. Yang, Z. Liu, and M. Sun. Graph neural networks: A review of methods and applications. arXiv preprint arXiv:1812.08434, 2018.
[85]
L. Li, N. Cao, L. Ying, and H. Tong. Admiring: Adversarial multi-network mining. In 2019 IEEE International Conference on Data Mining (ICDM), pages 1522--1527. IEEE, 2019.
[86]
Q. Zhou, Y. Ren, T. Xia, L. Yuan, and L. Chen. Data poisoning attacks on graph convolutional matrix completion. In Algorithms and Architectures for Parallel Processing, 2020.
[87]
D. Zhu, Z. Zhang, P. Cui, and W. Zhu. Robust graph convolutional networks against adversarial attacks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1399--1407, 2019.
[88]
X. Zhu and Z. Ghahramani. Learning from labeled and unlabeled data with label propagation. 2002.
[89]
D. Zügner, A. Akbarnejad, and S. Günnemann. Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2847--2856, 2018.
[90]
D. Zügner, O. Borchert, A. Akbarnejad, and S. Günnemann. Adversarial attacks on graph neural networks: Perturbations and their patterns. ACM Transactions on Knowledge Discovery from Data (TKDD), 14(5):1--31, 2020.
[91]
D. Zügner and S. Günnemann. Adversarial attacks on graph neural networks via meta learning. arXiv preprint arXiv:1902.08412, 2019.
[92]
D. Zügner and S. Günnemann. Certifiable robustness and robust training for graph convolutional networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 246--256, 2019.
[93]
D. Zügner and S. Günnemann. Certifiable robustness of graph convolutional networks under structure perturbations. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1656--1665, 2020.

Published In

ACM SIGKDD Explorations Newsletter, Volume 22, Issue 2
December 2020, 50 pages
ISSN: 1931-0145
EISSN: 1931-0153
DOI: 10.1145/3447556

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States
