DOI: 10.1145/3489517.3530558
Research article, DAC '22

Write or not: programming scheme optimization for RRAM-based neuromorphic computing

Published: 23 August 2022

ABSTRACT

One of the main fault-tolerance methods for neural network accelerators based on resistive random access memory (RRAM) crossbars is the programming-based method, also known as write-and-verify (W-V). In the basic W-V scheme, every device in the crossbars is programmed repeatedly until its conductance is close enough to its target, which incurs a huge overhead. To reduce this cost, we optimize the W-V scheme by proposing a probabilistic termination criterion for a single device and a systematic optimization method across multiple devices. Furthermore, we propose a joint algorithm that assists the new W-V scheme with incremental retraining, which further reduces the W-V cost. Compared with the basic W-V scheme, the proposed method improves accuracy by 0.23% for ResNet18 on CIFAR10 with only 9.7% of the W-V cost under variation with σ = 1.2.
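The abstract describes the basic W-V loop and the idea of relaxing its termination criterion only at a high level. Below is a minimal, illustrative Python sketch: each device is re-programmed until its conductance falls within a tolerance of its target, and a per-device tolerance stands in for the paper's probabilistic termination criterion so that less critical devices stop earlier. The function names, the Gaussian programming-noise model, and the tolerance values are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np


def write_and_verify(target, tol, sigma=1.2, max_pulses=100, rng=None):
    """Illustrative basic W-V loop (not the paper's implementation).

    Re-program a device until its conductance deviates from `target`
    by at most `tol`, or until the pulse budget is exhausted. Each
    programming attempt is modeled as the target plus Gaussian
    cycle-to-cycle variation with standard deviation `sigma` (an
    assumed device model). Returns the final conductance and the
    number of pulses spent, i.e. the per-device W-V cost.
    """
    rng = rng or np.random.default_rng()
    conductance = target + rng.normal(0.0, sigma)      # first write
    pulses = 1
    while abs(conductance - target) > tol and pulses < max_pulses:
        conductance = target + rng.normal(0.0, sigma)  # re-program (write)
        pulses += 1                                     # verify = loop test
    return conductance, pulses


def program_crossbar(targets, per_device_tol, **kwargs):
    """Program a whole crossbar. `per_device_tol` mimics the idea of
    relaxing the termination criterion for less critical devices so
    that the total number of W-V pulses (the cost) drops."""
    results = [write_and_verify(t, tol, **kwargs)
               for t, tol in zip(targets, per_device_tol)]
    conductances, pulses = zip(*results)
    return np.array(conductances), int(sum(pulses))


# Tiny usage example: a tight tolerance everywhere versus a relaxed
# tolerance on half the devices, compared by total pulse count.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    targets = rng.uniform(0.0, 10.0, size=64)
    tight = np.full(64, 0.5)
    mixed = np.where(np.arange(64) % 2 == 0, 0.5, 2.0)
    _, cost_tight = program_crossbar(targets, tight, rng=rng)
    _, cost_mixed = program_crossbar(targets, mixed, rng=rng)
    print(f"pulses (uniform tol): {cost_tight}, pulses (relaxed tol): {cost_mixed}")
```

In this toy model the relaxed tolerances directly trade residual deviation for fewer programming pulses; the paper's contribution is choosing where to make that trade (per device and across devices) and coupling it with incremental retraining so that accuracy is preserved.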




Published in

DAC '22: Proceedings of the 59th ACM/IEEE Design Automation Conference
July 2022, 1462 pages
ISBN: 9781450391429
DOI: 10.1145/3489517
Copyright © 2022 ACM

Publisher: Association for Computing Machinery, New York, NY, United States





