Research article · DOI: 10.1145/3674399.3674443

An Imperceptible and Owner-unique Watermarking Method for Graph Neural Networks

Published: 30 July 2024

Abstract

Graph Neural Networks (GNNs) have found widespread application across various domains, including social network analysis, recommendation systems, and fraud detection. Meanwhile, training a sophisticated GNN model is an extremely resource-intensive process. Therefore, protecting the intellectual property of GNN models is essential. However, limited research has been conducted on intellectual property protection for GNNs, and the few existing watermarking methods for GNNs overlook the vulnerabilities posed by evasion attacks and fraudulent declaration attacks. To fill this gap, in this paper we propose a novel GNN watermarking method that uses a bi-level optimization framework to embed an imperceptible and owner-unique watermark into GNNs. The proposed method achieves indistinguishability and uniqueness of the injected watermark, establishing a secure mechanism for intellectual property protection of GNNs. We evaluate our method on two benchmark datasets and three GNN models. The results indicate that our method effectively verifies model ownership with minimal impact on primary task performance. Furthermore, the method exhibits remarkable resilience against model fine-tuning and pruning attacks, as well as security against evasion attacks and fraudulent ownership claims.
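As a rough illustration of the bi-level embedding idea described above, the sketch below alternates an inner step that trains a toy GNN on its main task plus an owner key graph with an outer step that optimizes the key's node features so the model fits the secret key labels while the key stays statistically close to clean features. Everything here (the DenseGCN model, the key graph key_adj / key_x, the mean-matching imperceptibility penalty, the 0.1 loss weight, and the random toy data) is an assumption for illustration only and is not the authors' implementation or loss formulation.

```python
# Hypothetical sketch of bi-level watermark embedding for a GNN (not the paper's code).
import torch
import torch.nn.functional as F

class DenseGCN(torch.nn.Module):
    """Two-layer GCN operating on a dense, normalized adjacency matrix."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, out_dim)

    def forward(self, adj, x):
        h = F.relu(adj @ self.w1(x))
        return adj @ self.w2(h)

def normalize(adj):
    """Symmetric normalization with self-loops: D^{-1/2}(A + I)D^{-1/2}."""
    a = adj + torch.eye(adj.size(0))
    d = a.sum(1).pow(-0.5)
    return d.unsqueeze(1) * a * d.unsqueeze(0)

# Toy main-task data (random, for illustration only).
n, feat_dim, n_cls = 100, 16, 4
adj = normalize((torch.rand(n, n) < 0.05).float())
x = torch.randn(n, feat_dim)
y = torch.randint(0, n_cls, (n,))

# Owner-unique key: a small trigger graph with learnable node features (outer variable).
k = 8
key_adj = normalize(torch.ones(k, k))
key_x = torch.randn(k, feat_dim, requires_grad=True)   # optimized in the outer loop
key_y = torch.randint(0, n_cls, (k,))                   # secret target labels

model = DenseGCN(feat_dim, 32, n_cls)
inner_opt = torch.optim.Adam(model.parameters(), lr=1e-2)  # inner: train the model
outer_opt = torch.optim.Adam([key_x], lr=1e-2)             # outer: shape the watermark key

for step in range(200):
    # Inner step: fit the model on the main task plus the (fixed) watermark key.
    inner_opt.zero_grad()
    loss_main = F.cross_entropy(model(adj, x), y)
    loss_wm = F.cross_entropy(model(key_adj, key_x.detach()), key_y)
    (loss_main + loss_wm).backward()
    inner_opt.step()

    # Outer step: update key features so the current model fits the watermark while
    # the key's feature statistics stay close to the clean graph (imperceptibility).
    outer_opt.zero_grad()
    loss_fit = F.cross_entropy(model(key_adj, key_x), key_y)
    loss_impercept = (key_x.mean(0) - x.mean(0)).pow(2).sum()
    (loss_fit + 0.1 * loss_impercept).backward()
    outer_opt.step()

# Ownership check: a watermarked model should label the secret key graph as key_y.
with torch.no_grad():
    match = (model(key_adj, key_x).argmax(1) == key_y).float().mean()
print(f"watermark accuracy on key graph: {match.item():.2f}")
```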



      Published In

      ACM-TURC '24: Proceedings of the ACM Turing Award Celebration Conference - China 2024
      July 2024
      261 pages
      ISBN:9798400710117
      DOI:10.1145/3674399
This work is licensed under a Creative Commons Attribution 4.0 International License.

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

1. Backdoor
      2. Bi-level optimization framework
      3. Graph neural networks
      4. Intellectual property protection
      5. Watermarking

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

      ACM-TURC '24
