DOI: 10.1145/3616855.3635790 · WSDM Conference Proceedings · Research article

Maximizing Malicious Influence in Node Injection Attack

Published: 04 March 2024

Abstract

Graph neural networks (GNNs) have achieved impressive performance in various graph-related tasks. However, recent studies have found that GNNs are vulnerable to adversarial attacks. Node injection attacks (NIA) have emerged as a new scenario of graph adversarial attacks, in which malicious nodes are injected into the original graph rather than modifying it directly. In this paper, we focus on a more realistic NIA scenario, in which the attacker may inject only a small number of nodes, with very limited information, to degrade the performance of GNNs. We analyze the susceptibility of nodes and, based on this analysis, propose a global node injection attack framework, MaxiMal, that maximizes malicious influence under a strict black-box setting. MaxiMal first introduces a susceptible-reverse influence sampling strategy to select neighbor nodes that can spread malicious information widely. A contrastive loss is then introduced to optimize the objective by updating the edges and features of the injected nodes. Extensive experiments on three benchmark datasets demonstrate the superiority of MaxiMal over state-of-the-art approaches.
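This page does not include the paper's algorithmic details. As background only, the "susceptible-reverse influence sampling" step named in the abstract builds on classic reverse influence sampling (RIS) from influence maximization. The sketch below is a minimal illustration of plain RIS with greedy max-coverage under an independent-cascade model; the function names, the edge probability `p`, and the coverage step are our assumptions for illustration, not MaxiMal's actual procedure.

```python
import random

def reverse_reachable_set(in_adj, root, p):
    """Sample one reverse-reachable (RR) set: walk backwards from
    `root` along in-edges, keeping each edge with probability p
    (independent-cascade model)."""
    visited = {root}
    frontier = [root]
    while frontier:
        node = frontier.pop()
        for pred in in_adj.get(node, []):
            if pred not in visited and random.random() < p:
                visited.add(pred)
                frontier.append(pred)
    return visited

def select_influential(in_adj, nodes, k, num_samples=2000, p=0.5, seed=0):
    """Greedy max-coverage over sampled RR sets: the k nodes that
    appear in (cover) the most sets approximately maximize the
    expected number of nodes they influence."""
    random.seed(seed)
    rr_sets = [reverse_reachable_set(in_adj, random.choice(nodes), p)
               for _ in range(num_samples)]
    chosen, covered = [], set()
    for _ in range(k):
        best = max(nodes,
                   key=lambda v: sum(1 for i, s in enumerate(rr_sets)
                                     if i not in covered and v in s))
        chosen.append(best)
        covered |= {i for i, s in enumerate(rr_sets) if best in s}
    return chosen
```

For example, on a star graph where node 0 points to every other node (`in_adj = {1: [0], 2: [0], 3: [0], 4: [0]}`), `select_influential(in_adj, [0, 1, 2, 3, 4], k=1)` selects node 0, the only node able to reach all the others.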



Published In

WSDM '24: Proceedings of the 17th ACM International Conference on Web Search and Data Mining
March 2024, 1246 pages
ISBN: 9798400703713
DOI: 10.1145/3616855

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. adversarial attack
    2. graph contrastive learning
    3. graph neural networks
    4. influence maximization


    Funding Sources

    • National Natural Science Foundation of China

    Conference

    WSDM '24

    Acceptance Rates

    Overall Acceptance Rate 498 of 2,863 submissions, 17%
