
SRA: a secure ReRAM-based DNN accelerator

Published: 23 August 2022

Abstract

Deep Neural Network (DNN) accelerators are increasingly developed to pursue high efficiency in DNN computing. However, protecting the IP of the DNNs deployed on such accelerators has received far less attention. Although prior works have targeted this problem for CMOS-based designs, there is still no solution for ReRAM-based accelerators, which pose new security challenges due to their crossbar structure and non-volatility. ReRAM's non-volatility retains data even after the system is powered off, so the stored DNN model can be stolen by simply reading out the ReRAM content. Because the crossbar structure can only compute on plaintext data, encrypting the ReRAM content is not a feasible defense in this scenario.
In this paper, we propose SRA, a secure ReRAM-based DNN accelerator that stores DNN weights on crossbars in an encrypted format while preserving ReRAM's in-memory computing capability. The proposed encryption scheme also supports sharing bits among multiple weights, significantly reducing the storage overhead. In addition, SRA uses a novel high-bandwidth stochastic computing (SC) conversion scheme to protect each layer's intermediate results, which also contain sensitive information about the model. Our experimental results show that SRA effectively prevents pirating of both the deployed DNN weights and the intermediate results with negligible accuracy loss, and achieves a 1.14× performance speedup and 9% energy reduction compared to ISAAC, a non-secure ReRAM-based baseline.
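The abstract does not detail SRA's high-bandwidth SC conversion scheme, so as background only, here is a minimal sketch of the unipolar stochastic computing encoding that such schemes build on: a value in [0, 1] becomes a random bitstream whose density of 1s equals the value, and multiplication reduces to a bitwise AND of independent streams. All function names here are hypothetical illustrations, not SRA's actual interface.

```python
import random

def to_sc_stream(x, length, rng):
    """Encode x in [0, 1] as a unipolar stochastic bitstream:
    each bit is 1 with probability x."""
    return [1 if rng.random() < x else 0 for _ in range(length)]

def sc_value(stream):
    """Decode a bitstream back to a value: the fraction of 1s."""
    return sum(stream) / len(stream)

def sc_multiply(a, b):
    """Bitwise AND of two independent streams approximates a * b,
    since P(a_i = 1 and b_i = 1) = P(a_i = 1) * P(b_i = 1)."""
    return [x & y for x, y in zip(a, b)]

rng = random.Random(0)       # fixed seed for reproducibility
N = 4096                     # longer streams give lower variance

s1 = to_sc_stream(0.5, N, rng)
s2 = to_sc_stream(0.25, N, rng)
prod = sc_value(sc_multiply(s1, s2))   # close to 0.5 * 0.25 = 0.125
```

A side benefit relevant to the paper's security goal: an individual SC bitstream looks like random noise, so intermediate results encoded this way reveal little to an attacker who captures a short window of the stream.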

References

[1] Y. Chen et al., "Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks," in ISCA, 2016.
[2] Y. Chen et al., "DaDianNao: A machine-learning supercomputer," in MICRO, 2014.
[3] N. Jouppi et al., "In-datacenter performance analysis of a tensor processing unit," in ISCA, 2017.
[4] H. Wong et al., "Metal-oxide RRAM," in Proceedings of the IEEE, 2012.
[5] A. Vincent et al., "Spin-transfer torque magnetic memory as a stochastic memristive synapse," in ISCAS, 2014.
[6] G. Burr et al., "Experimental demonstration and tolerancing of a large-scale neural network (165 000 synapses) using phase-change memory as the synaptic weight element," in IEEE Transactions on Electron Devices, 2015.
[7] L. Zhao et al., "AEP: An error-bearing neural network accelerator for energy efficiency and model protection," in ICCAD, 2017.
[8] L. Zhao et al., "SCA: A secure CNN accelerator for both training and inference," in DAC, 2020.
[9] W. Li et al., "P3M: A PIM-based neural network model protection scheme for deep learning accelerator," in ASPDAC, 2019.
[10] C. Yeh et al., "Compact one-transistor-N-RRAM array architecture for advanced CMOS technology," in IEEE Journal of Solid-State Circuits, 2015.
[11] A. Ren et al., "SC-DCNN: Highly-scalable deep convolutional neural network using stochastic computing," in ASPLOS, 2017.
[12] S. Li et al., "SCOPE: A stochastic computing engine for DRAM-based in-situ accelerator," in MICRO, 2018.
[13] M. Bojnordi et al., "Memristive Boltzmann machine: A hardware accelerator for combinatorial optimization and deep learning," in HPCA, 2016.
[14] L. Zhao et al., "Flipping bits to share crossbars in ReRAM-based DNN accelerator," in ICCD, 2021.
[15] A. Shafiee et al., "ISAAC: A convolutional neural network accelerator with in-situ analog arithmetic in crossbars," in ISCA, 2016.
[16] N. Muralimanohar et al., "CACTI 6.0: A tool to model large caches," technical report, 2009.
[17] X. Dong et al., "NVSim: A circuit-level performance, energy, and area model for emerging nonvolatile memory," in TCAD, 2012.
[18] M. Saberi et al., "Analysis of power consumption and linearity in capacitive digital-to-analog converters used in successive approximation ADCs," in TCAS-I, 2011.
[19] Y. LeCun et al., "The MNIST database of handwritten digits," http://yann.lecun.com/exdb/mnist/, 2011.
[20] Y. Netzer et al., "Reading digits in natural images with unsupervised feature learning," in NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
[21] A. Krizhevsky et al., "Learning multiple layers of features from tiny images," technical report, 2009.
[22] J. Deng et al., "ImageNet: A large-scale hierarchical image database," in CVPR, 2009.
[23] Y. LeCun et al., "Gradient-based learning applied to document recognition," in Proceedings of the IEEE, 1998.
[24] K. He et al., "Deep residual learning for image recognition," in CVPR, 2016.
[25] S. Xie et al., "Aggregated residual transformations for deep neural networks," in CVPR, 2017.
[26] C. Szegedy et al., "Going deeper with convolutions," in CVPR, 2015.
[27] G. Huang et al., "Densely connected convolutional networks," in CVPR, 2017.
[28] T. Yang et al., "Sparse ReRAM engine: Joint exploration of activation and weight sparsity in compressed neural networks," in ISCA, 2019.

Cited By

  • (2024) CiMSAT: Exploiting SAT Analysis to Attack Compute-in-Memory Architecture Defenses. In Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security, pp. 3436-3450. DOI: 10.1145/3658644.3690251. Published online: 2 Dec 2024.
  • (2024) Rapper: A Parameter-Aware Repair-in-Memory Accelerator for Blockchain Storage Platform. In 2024 IEEE International Symposium on High-Performance Computer Architecture (HPCA), pp. 468-482. DOI: 10.1109/HPCA57654.2024.00042. Published online: 2 Mar 2024.
  • (2023) WeightLock: A Mixed-Grained Weight Encryption Approach Using Local Decrypting Units for Ciphertext Computing in DNN Accelerators. In 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), pp. 1-5. DOI: 10.1109/AICAS57966.2023.10168612. Published online: 11 Jun 2023.


Published In

DAC '22: Proceedings of the 59th ACM/IEEE Design Automation Conference
July 2022
1462 pages
ISBN: 9781450391429
DOI: 10.1145/3489517

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. ReRAM
  2. accelerator
  3. neural networks
  4. security

Qualifiers

  • Research-article

Conference

DAC '22: 59th ACM/IEEE Design Automation Conference
July 10-14, 2022
San Francisco, California, USA

Acceptance Rates

Overall acceptance rate: 1,770 of 5,499 submissions (32%)



