ABSTRACT
Watermarking neural networks (NNs) for ownership protection has received considerable attention recently. Resistance to both model pruning and fine-tuning is commonly used to evaluate the robustness of a watermarked NN. However, the rationale behind such robustness remains relatively unexplored in the literature. In this paper, we study this problem and propose a sparse trigger pattern (STP) guided deep learning model watermarking method. We provide empirical evidence that trigger patterns make the distribution of model parameters more compact, and thus exhibit interpretable resilience to model pruning and fine-tuning. We also find that the effect of STP can be technically interpreted as a form of first-layer dropout. Extensive experiments demonstrate the robustness of our method.
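To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of how a sparse trigger pattern might be constructed and stamped onto input images for backdoor-style watermark embedding and verification; the function names, the sparsity level, and the image sizes are illustrative assumptions.

```python
import numpy as np

def make_sparse_trigger(shape, sparsity=0.05, seed=0):
    """Build a sparse trigger pattern (STP): a mask that perturbs only a
    small fraction of pixels, leaving the rest of the image untouched."""
    rng = np.random.default_rng(seed)
    mask = (rng.random(shape) < sparsity).astype(np.float32)   # which pixels to overwrite
    pattern = rng.random(shape).astype(np.float32)             # trigger values in [0, 1)
    return mask, pattern

def stamp(images, mask, pattern):
    """Overlay the trigger on a batch of images: triggered pixels take the
    pattern value, all other pixels are kept as-is."""
    return images * (1.0 - mask) + pattern * mask

# Example: stamp a batch of 8 grayscale 32x32 images. A watermarked model
# would be trained to map such triggered inputs to a secret target label,
# and ownership is later verified by the model's accuracy on triggered inputs.
mask, pattern = make_sparse_trigger((32, 32), sparsity=0.05)
images = np.random.default_rng(1).random((8, 32, 32)).astype(np.float32)
triggered = stamp(images, mask, pattern)
```

Because the mask is sparse, the trigger changes only a small number of pixels, which is what allows its effect on the first layer to be read as a dropout-like perturbation.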
Index Terms
- Sparse Trigger Pattern Guided Deep Learning Model Watermarking