ABSTRACT
This work targets the edge-computing scenario in which terminal deep learning accelerators run pre-trained neural network models distributed by third-party providers (e.g., data-center clouds) to process private data locally instead of sending it to the cloud. In this scenario, if the model's parameters and hyper-parameters are transmitted and processed unencrypted, the model is exposed to attack on unverified devices. We address this security problem with on-chip memory Physical Unclonable Functions (PUFs) and Processing-In-Memory (PIM): model execution is permitted only on authorized devices, and the model is protected against white-box attacks, black-box attacks, and model-tampering attacks. The proposed PUFs-and-PIM based Protection method for neural Models (P3M) can exploit even unstable PUFs to protect neural models on edge deep learning accelerators with negligible performance overhead. Experimental results show considerable performance improvement over the two state-of-the-art solutions we evaluated.
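To illustrate the general idea of binding a model to a device through a noisy on-chip PUF, the toy sketch below is a minimal, hypothetical example (not the actual P3M scheme): it simulates an unstable memory PUF, stabilizes its response by majority voting over repeated readouts, derives a device key from the stabilized bits, and XOR-encrypts quantized model weights so that only a device able to regenerate the same response can recover them. All names, parameters, and the error-correction and cipher choices here are illustrative assumptions.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "golden" PUF response of one device (e.g., DRAM startup bits).
TRUE_RESPONSE = rng.integers(0, 2, size=256).astype(np.uint8)

def read_puf(flip_prob=0.03):
    """Simulate one noisy PUF readout: each bit flips with probability flip_prob."""
    noise = rng.random(TRUE_RESPONSE.size) < flip_prob
    return TRUE_RESPONSE ^ noise.astype(np.uint8)

def stable_response(n_reads=25):
    """Majority-vote over repeated readouts to stabilize the unstable PUF bits."""
    votes = sum(read_puf().astype(int) for _ in range(n_reads))
    return (votes * 2 > n_reads).astype(np.uint8)

def derive_key(bits, length):
    """Hash the (stabilized) response into a key stream of `length` bytes."""
    seed = hashlib.sha256(bits.tobytes()).digest()
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return np.frombuffer(stream[:length], dtype=np.uint8)

# Toy "model": quantized 8-bit weights to be protected in transit.
weights = rng.integers(0, 256, size=1024).astype(np.uint8)

# The provider encrypts against the enrolled (golden) response ...
enc = weights ^ derive_key(TRUE_RESPONSE, weights.size)

# ... and only the authorized device, regenerating the same response
# on-chip, recovers the plaintext weights.
dec = enc ^ derive_key(stable_response(), weights.size)
```

A cloned or unauthorized device cannot reproduce `TRUE_RESPONSE`, so its derived key stream differs and decryption yields garbage; in a real system the majority vote would be replaced by a proper fuzzy extractor, and the XOR stream by an authenticated cipher.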
P3M: a PIM-based neural network model protection scheme for deep learning accelerator