KDD '22 research article · DOI: 10.1145/3534678.3539257

MetaV: A Meta-Verifier Approach to Task-Agnostic Model Fingerprinting

Published: 14 August 2022

ABSTRACT

Protecting the intellectual property (IP) of deep neural networks (DNN) has become an urgent concern for IT corporations. For model piracy forensics, previous model fingerprinting schemes commonly use adversarial examples constructed for the owner's model as the fingerprint, and verify whether a suspect model is pirated from the original model by matching the behavioral patterns of the two models on the fingerprint examples. However, these methods rely heavily on the characteristics of classification tasks, which inhibits their application to more general scenarios. To address this issue, we present MetaV, the first task-agnostic model fingerprinting framework, which enables fingerprinting on a much wider range of DNNs independent of the downstream learning task and exhibits strong robustness against a variety of ownership obfuscation techniques. Specifically, we generalize previous schemes into two critical design components in MetaV: the adaptive fingerprint and the meta-verifier, which are jointly optimized so that the meta-verifier learns to determine whether a suspect model is stolen based on the concatenated outputs of the suspect model on the adaptive fingerprint. As the key to being task-agnostic, the full process makes no assumption on the internals of the models in the ensemble, provided they share the same input and output dimensions. Spanning classification, regression, and generative modeling, extensive experimental results validate the substantially improved performance of MetaV over state-of-the-art fingerprinting schemes and demonstrate its enhanced generality for task-agnostic fingerprinting. For example, on fingerprinting a ResNet-18 trained for skin cancer diagnosis, MetaV simultaneously achieves 100% true positives and 100% true negatives on a diverse test set of 70 suspect models, an approximately 220% relative improvement in ARUC over the best baseline.
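The core idea in the abstract, a verifier that decides "pirated or not" from a suspect model's concatenated outputs on a set of fingerprint inputs, can be illustrated with a toy sketch. Everything below is hypothetical and not the paper's implementation: the "models" are random linear maps rather than DNNs, the fingerprint inputs are held fixed for brevity (MetaV jointly optimizes them with the verifier), and the meta-verifier is reduced to a logistic-regression classifier trained by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for DNNs: linear maps from R^4 to R^2. "Pirated" models are
# small perturbations of the victim; "innocent" models are independent draws.
victim = rng.normal(size=(4, 2))
pirated = [victim + 0.05 * rng.normal(size=(4, 2)) for _ in range(10)]
innocent = [rng.normal(size=(4, 2)) for _ in range(10)]

# Three fingerprint inputs (fixed here; jointly optimized in the real method).
fingerprint = rng.normal(size=(3, 4))

def features(model):
    # Concatenated outputs of a suspect model on all fingerprint inputs:
    # this 6-dim vector is the only thing the meta-verifier ever sees.
    return (fingerprint @ model).ravel()

X = np.stack([features(m) for m in pirated + innocent])
y = np.array([1.0] * 10 + [0.0] * 10)  # 1 = pirated, 0 = innocent

# Meta-verifier: logistic regression fitted by plain gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted piracy probability
    g = p - y                                 # gradient of the logistic loss
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

scores = 1.0 / (1.0 + np.exp(-(X @ w + b)))
pred = scores > 0.5
print("training accuracy:", (pred == y).mean())
```

Because the pirated models' outputs on the fingerprint inputs cluster tightly around the victim's, while the innocent models' outputs scatter, even this tiny verifier separates the two groups; the task-agnostic property shows up in that nothing here depends on what the models were trained to do, only on matching input and output dimensions.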

Supplemental Material

kdd22-meta.mp4 (mp4, 204.6 MB)


Published in

KDD '22: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
August 2022, 5033 pages
ISBN: 9781450393850
DOI: 10.1145/3534678
Copyright © 2022 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 1,133 of 8,635 submissions (13%)
