Abstract:
Deep learning attacks launched by adversaries have drawn rapidly growing attention. Especially with the spread of edge computing devices that cooperate with the central cloud, how to protect neural network models and private data from attack has become a pressing topic. In this paper, we consider two collaborative edge-cloud deep learning scenarios. In the first, deep learning models are trained on the resource-rich cloud and deployed on terminal deep learning accelerators for delay-sensitive tasks. In the second, edge-collected data is offloaded to the cloud for computationally intensive tasks. In both scenarios, the valuable pre-trained neural network models and the private data are exposed to attack if they are transmitted and processed in unencrypted form. To tackle this security problem, we present a lightweight protection scheme against data-oriented and model-oriented attacks. Using on-chip memory Physical Unclonable Functions (PUFs) and Processing-In-Memory (PIM), our method limits model execution to specific edge devices and prevents unauthorized analysis of private data. Experiments on state-of-the-art deep learning networks show that our method secures edge deep learning models and user data with negligible performance overhead.
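The abstract only states the high-level idea of binding model execution to a device via an on-chip PUF; the sketch below illustrates that general concept, not the authors' actual scheme. It assumes a PUF response can be reproduced only on the enrolled device (here faked with a `device_secret` stand-in), and uses a hash-based keystream purely for illustration; the names `puf_response`, `keystream`, and `protect` are hypothetical, and the real method additionally relies on PIM, which is not modeled here.

```python
import hashlib

def puf_response(challenge: bytes, device_secret: bytes) -> bytes:
    # Stand-in for reading an on-chip memory PUF: in hardware the response
    # comes from device-specific physical variations, not a stored secret.
    return hashlib.sha256(device_secret + challenge).digest()

def keystream(key: bytes, length: int) -> bytes:
    # Expand a key into a keystream by hashing a counter
    # (illustrative only, not a production cipher).
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:length])

def protect(data: bytes, key: bytes) -> bytes:
    # XOR with a key-derived stream; applying it twice with the same key
    # recovers the plaintext, so the same function encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, keystream(key, len(data))))

# Provisioning on the cloud: enroll the device's PUF response and
# encrypt the pre-trained weights under the derived key.
challenge = b"model-v1"
enrolled = puf_response(challenge, device_secret=b"device-A-variations")
encrypted_weights = protect(b"pretrained weight blob ...", enrolled)

# Execution on the edge device: only the enrolled device reproduces the
# same response, so only it recovers usable weights.
local = puf_response(challenge, device_secret=b"device-A-variations")
weights = protect(encrypted_weights, local)
print(weights)
```

In this sketch the key never leaves the device, so intercepting the transmitted model yields only ciphertext, and a device whose PUF response differs cannot reconstruct the weights.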
Published in: 2019 IEEE 37th VLSI Test Symposium (VTS)
Date of Conference: 23-25 April 2019
Date Added to IEEE Xplore: 11 July 2019