DOI: 10.1145/3489517.3530440

SRA: a secure ReRAM-based DNN accelerator

Published: 23 August 2022

ABSTRACT

Deep Neural Network (DNN) accelerators are increasingly developed to pursue high efficiency in DNN computing. However, protecting the IP of the DNNs deployed on such accelerators is an important topic that has received little attention. Although previous works have targeted this problem for CMOS-based designs, there is still no solution for ReRAM-based accelerators, which pose new security challenges due to their crossbar structure and non-volatility. ReRAM's non-volatility retains data even after the system is powered off, so the stored DNN model is vulnerable to attacks that simply read out the ReRAM content. Moreover, because the crossbar structure can only compute on plaintext data, encrypting the ReRAM content is not a feasible defense in this scenario.
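To make the plaintext constraint concrete, here is a minimal sketch (our illustration, not code from the paper) of the in-situ dot product a ReRAM crossbar performs: applying input voltages to the rows yields column currents I = V·G, where the conductance matrix G encodes the weights. If the cells instead held ciphertext, the analog MAC would silently operate on the ciphertext and produce wrong results:

```python
import numpy as np

# Toy model of a ReRAM crossbar: column currents are the analog dot product
# of the row voltages V with the conductance-encoded weight matrix G.
rng = np.random.default_rng(0)
G = rng.random((4, 3))    # plaintext weights programmed as conductances
V = rng.random(4)         # input activations applied as row voltages

I_plain = V @ G           # what the crossbar physically computes

# Hypothetical encryption of the stored weights (a toy additive mask, NOT
# SRA's actual scheme): the in-situ MAC now runs on ciphertext, and the
# column currents no longer equal the intended dot product.
mask = rng.random((4, 3))
I_cipher = V @ (G + mask)
print(np.allclose(I_plain, I_cipher))   # False
```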

In this paper, we propose SRA, a secure ReRAM-based DNN accelerator that stores DNN weights on crossbars in an encrypted format while still maintaining ReRAM's in-memory computing capability. The proposed encryption scheme also supports sharing bits among multiple weights, significantly reducing the storage overhead. In addition, SRA uses a novel high-bandwidth stochastic computing (SC) conversion scheme to protect each layer's intermediate results, which also contain sensitive information about the model. Our experimental results show that SRA effectively prevents piracy of both the deployed DNN weights and the intermediate results with negligible accuracy loss, while achieving a 1.14x performance speedup and 9% energy reduction over ISAAC [15], a non-secure ReRAM-based baseline.
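The abstract does not detail SRA's conversion scheme, but the SC it refers to is stochastic computing (see [11, 12]), in which a value is encoded as the density of 1s in a random bitstream. As background only, a minimal sketch of conventional unipolar SC encoding, where multiplication reduces to a bitwise AND of independent streams:

```python
import numpy as np

# Conventional unipolar stochastic computing (background, not SRA's exact
# scheme): a value x in [0, 1] becomes a bitstream with P(bit = 1) = x.
rng = np.random.default_rng(1)

def sc_encode(x, n_bits=4096):
    return (rng.random(n_bits) < x).astype(np.uint8)

def sc_decode(stream):
    return stream.mean()          # fraction of 1s recovers the value

print(round(sc_decode(sc_encode(0.37)), 2))    # ~0.37

# Multiplying two values is just a bitwise AND of independent streams.
a, b = sc_encode(0.5), sc_encode(0.6)
print(round(sc_decode(a & b), 2))              # ~0.5 * 0.6 = 0.30
```

Because each bit is an independent random draw, an intercepted fragment of such a stream reveals the encoded value only statistically; how SRA builds on this to protect intermediate results is specified in the paper itself.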

References

  1. Y. Chen et al., "Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks," in ISCA, 2016.
  2. Y. Chen et al., "DaDianNao: A machine-learning supercomputer," in MICRO, 2014.
  3. N. Jouppi et al., "In-datacenter performance analysis of a tensor processing unit," in ISCA, 2017.
  4. H. Wong et al., "Metal-oxide RRAM," in Proceedings of the IEEE, 2012.
  5. A. Vincent et al., "Spin-transfer torque magnetic memory as a stochastic memristive synapse," in ISCAS, 2014.
  6. G. Burr et al., "Experimental demonstration and tolerancing of a large-scale neural network (165 000 synapses) using phase-change memory as the synaptic weight element," in IEEE Transactions on Electron Devices, 2015.
  7. L. Zhao et al., "AEP: An error-bearing neural network accelerator for energy efficiency and model protection," in ICCAD, 2017.
  8. L. Zhao et al., "SCA: A secure CNN accelerator for both training and inference," in DAC, 2020.
  9. W. Li et al., "P3M: A PIM-based neural network model protection scheme for deep learning accelerator," in ASPDAC, 2019.
  10. C. Yeh et al., "Compact one-transistor-N-RRAM array architecture for advanced CMOS technology," in IEEE Journal of Solid-State Circuits, 2015.
  11. A. Ren et al., "SC-DCNN: Highly-scalable deep convolutional neural network using stochastic computing," in ASPLOS, 2017.
  12. S. Li et al., "SCOPE: A stochastic computing engine for DRAM-based in-situ accelerator," in MICRO, 2018.
  13. M. Bojnordi et al., "Memristive Boltzmann machine: A hardware accelerator for combinatorial optimization and deep learning," in HPCA, 2016.
  14. L. Zhao et al., "Flipping bits to share crossbars in ReRAM-based DNN accelerator," in ICCD, 2021.
  15. A. Shafiee et al., "ISAAC: A convolutional neural network accelerator with in-situ analog arithmetic in crossbars," in ISCA, 2016.
  16. N. Muralimanohar et al., "CACTI 6.0: A tool to model large caches," technical report, 2009.
  17. X. Dong et al., "NVSim: A circuit-level performance, energy, and area model for emerging nonvolatile memory," in TCAD, 2012.
  18. M. Saberi et al., "Analysis of power consumption and linearity in capacitive digital-to-analog converters used in successive approximation ADCs," in TCAS-I, 2011.
  19. Y. LeCun et al., "The MNIST database of handwritten digits," http://yann.lecun.com/exdb/mnist/, 2011.
  20. Y. Netzer et al., "Reading digits in natural images with unsupervised feature learning," in NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
  21. A. Krizhevsky et al., "Learning multiple layers of features from tiny images," technical report, 2009.
  22. J. Deng et al., "ImageNet: A large-scale hierarchical image database," in CVPR, 2009.
  23. Y. LeCun et al., "Gradient-based learning applied to document recognition," in Proceedings of the IEEE, 1998.
  24. K. He et al., "Deep residual learning for image recognition," in CVPR, 2016.
  25. S. Xie et al., "Aggregated residual transformations for deep neural networks," in CVPR, 2017.
  26. C. Szegedy et al., "Going deeper with convolutions," in CVPR, 2015.
  27. G. Huang et al., "Densely connected convolutional networks," in CVPR, 2017.
  28. T. Yang et al., "Sparse ReRAM engine: Joint exploration of activation and weight sparsity in compressed neural networks," in ISCA, 2019.

Published in

    DAC '22: Proceedings of the 59th ACM/IEEE Design Automation Conference
    July 2022
    1462 pages
    ISBN: 9781450391429
    DOI: 10.1145/3489517

    Copyright © 2022 ACM


    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    • Published: 23 August 2022


    Qualifiers

    • research-article

    Acceptance Rates

    Overall Acceptance Rate: 1,770 of 5,499 submissions, 32%

