DOI: 10.1145/3287624.3287695

P3M: a PIM-based neural network model protection scheme for deep learning accelerator

Authors Info & Claims
Published:21 January 2019Publication History

ABSTRACT

This work targets the edge-computing scenario in which terminal deep learning accelerators use pre-trained neural network models distributed by third-party providers (e.g., data-center clouds) to process private data locally instead of sending it to the cloud. In this scenario, the model is exposed to attack on unverified devices if its parameters and hyper-parameters are transmitted and processed unencrypted. Our work tackles this security problem with on-chip memory Physical Unclonable Functions (PUFs) and Processing-In-Memory (PIM): model execution is allowed only on authorized devices, and the model is protected from white-box attacks, black-box attacks, and model tampering. The proposed PUFs-and-PIM based Protection method for neural Models (P3M) can exploit even unstable PUFs to protect neural models on edge deep learning accelerators with negligible performance overhead. Experimental results show considerable performance improvement over the two state-of-the-art solutions we evaluated.
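The paper itself details the P3M architecture; as a rough illustration of the general idea of binding a model to a device through an unstable PUF, the Python sketch below derives a device-bound key from a simulated noisy PUF readout (using simple majority voting as a toy stand-in for a real fuzzy extractor) and uses that key to seal and unseal model parameters. All names, parameters, and the toy cipher are illustrative assumptions, not the paper's actual design.

    import hashlib
    import os
    import random

    PUF_BITS = 256      # hypothetical PUF response length (illustrative)
    NOISE_RATE = 0.05   # assumed fraction of response bits that flip per readout

    def read_puf(reference):
        """Simulate one readout of an unstable on-chip memory PUF: the device's
        intrinsic bit pattern with a few random bit flips."""
        return [b ^ (1 if random.random() < NOISE_RATE else 0) for b in reference]

    def stable_response(reference, readouts=21):
        """Majority-vote several readouts to mask PUF instability
        (a toy stand-in for a real fuzzy extractor / ECC helper-data scheme)."""
        votes = [0] * PUF_BITS
        for _ in range(readouts):
            for i, bit in enumerate(read_puf(reference)):
                votes[i] += bit
        return bytes(1 if v > readouts // 2 else 0 for v in votes)

    def device_key(reference):
        """Hash the stabilized response into a symmetric key bound to this chip."""
        return hashlib.sha256(stable_response(reference)).digest()

    def xor_stream(data, key):
        """Toy stream cipher: XOR the data with a SHA-256-based keystream
        (illustration only, not a production cipher)."""
        stream, counter = bytearray(), 0
        while len(stream) < len(data):
            stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return bytes(d ^ k for d, k in zip(data, stream))

    if __name__ == "__main__":
        # The chip's intrinsic PUF pattern and a stand-in for model parameters.
        # (A real provider would seal against the response enrolled for that chip.)
        puf_reference = [random.randint(0, 1) for _ in range(PUF_BITS)]
        weights = os.urandom(4096)

        sealed = xor_stream(weights, device_key(puf_reference))     # provider seals the model
        recovered = xor_stream(sealed, device_key(puf_reference))   # authorized device unseals it
        assert recovered == weights
        print("model recovered only with the device-bound PUF key")

The sketch only illustrates the key-binding idea; it does not model the PIM datapath or the protections against white-box, black-box, and tampering attacks described in the paper.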


Published in

ASPDAC '19: Proceedings of the 24th Asia and South Pacific Design Automation Conference
January 2019, 794 pages
ISBN: 9781450360074
DOI: 10.1145/3287624
Copyright © 2019 Association for Computing Machinery

Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 466 of 1,454 submissions, 32%
