DOI: 10.1145/3400302.3415683
Research Article | Public Access

DP-MAP: towards resistive dot-product engines with improved precision

Published: 17 December 2020

Abstract

The natural multiply-and-accumulate capability of memristor crossbar arrays promises unprecedented processing power for resistive dot-product engines (DPEs), which can accelerate approximate matrix-vector multiplication. To overcome the challenges of low-precision devices and voltage drop across non-zero array parasitics, each matrix element can be represented using two memristors. In this paper, we propose differential pair map (DP-MAP), the first matrix-to-memristor-conductance mapping algorithm specifically designed for crossbars with a differential pair configuration. In contrast, previous works treat the differential pair configuration as an afterthought, which limits the achievable precision. The specified conductance values are then programmed into the memristor hardware using accurate closed-loop tuning. High-precision analog computation is attained by judiciously selecting the conductance range and avoiding an explicit decomposition of each matrix into a positive and a negative component. Short run-time is achieved using a hierarchical optimization algorithm and two speed-up techniques. Compared with earlier studies, the computational accuracy is improved by 3.36X, which translates into signal and image compression with 61% and 94% higher quality, respectively. The simulation time of complex physical systems modeled by partial differential equations (PDEs) is reduced by 5.87X.
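
The abstract contrasts DP-MAP with mappings that treat the differential pair configuration as an afterthought, i.e., that first split the matrix into a positive and a negative part and map each part to its own crossbar. The sketch below is a minimal, hedged illustration of that baseline and of the ideal differential read-out it relies on; it is not the DP-MAP algorithm itself, and the conductance range and all names (map_differential_pair, ideal_mvm, G_MIN, G_MAX) are assumptions made for the example.

```python
# Illustrative baseline only (not DP-MAP): map a signed matrix onto a
# differential pair of memristor conductance matrices and recover the
# matrix-vector product from the two column currents. Device range is assumed.
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4  # assumed programmable conductance range (siemens)

def map_differential_pair(W):
    """Baseline 'afterthought' mapping: decompose W into positive and negative
    parts, then scale each part linearly into [G_MIN, G_MAX]."""
    W_pos = np.maximum(W, 0.0)
    W_neg = np.maximum(-W, 0.0)
    scale = (G_MAX - G_MIN) / np.abs(W).max()  # conductance per unit matrix value
    G_pos = G_MIN + scale * W_pos
    G_neg = G_MIN + scale * W_neg
    return G_pos, G_neg, scale

def ideal_mvm(G_pos, G_neg, scale, x):
    """Ideal (parasitic-free) dot-product engine model: the currents of the two
    crossbars are subtracted and rescaled, so the G_MIN offsets cancel and
    y = W @ x is recovered exactly in this idealized setting."""
    return (G_pos @ x - G_neg @ x) / scale

W = np.random.randn(8, 8)
x = np.random.randn(8)
G_pos, G_neg, s = map_differential_pair(W)
print(np.allclose(ideal_mvm(G_pos, G_neg, s, x), W @ x))  # True in the ideal model
```

Under non-ideal conditions (finite device precision, IR drop over array parasitics), such an explicit decomposition limits the achievable precision; per the abstract, DP-MAP instead selects both conductances of each pair jointly over a judiciously chosen range.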



Information

Published In

ICCAD '20: Proceedings of the 39th International Conference on Computer-Aided Design
November 2020
1396 pages
ISBN: 9781450380263
DOI: 10.1145/3400302
  • General Chair: Yuan Xie
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

In-Cooperation

  • IEEE CAS
  • IEEE CEDA
  • IEEE CS

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 17 December 2020


Qualifiers

  • Research-article

Conference

ICCAD '20

Acceptance Rates

Overall Acceptance Rate: 457 of 1,762 submissions, 26%


Bibliometrics

Article Metrics

  • Downloads (last 12 months): 57
  • Downloads (last 6 weeks): 11

Reflects downloads up to 16 Feb 2025


Cited By

  • (2024) The Lynchpin of In-Memory Computing: A Benchmarking Framework for Vector-Matrix Multiplication in RRAMs. 2024 International Conference on Neuromorphic Systems (ICONS), pp. 336-342. DOI: 10.1109/ICONS62911.2024.00058. Online publication date: 30-Jul-2024.
  • (2022) Towards resilient analog in-memory deep learning via data layout re-organization. Proceedings of the 59th ACM/IEEE Design Automation Conference, pp. 859-864. DOI: 10.1145/3489517.3530532. Online publication date: 10-Jul-2022.
  • (2021) Accelerating AI Applications using Analog In-Memory Computing. Proceedings of the 2021 Great Lakes Symposium on VLSI, pp. 379-384. DOI: 10.1145/3453688.3461746. Online publication date: 22-Jun-2021.
  • (2021) Fast and Low-Cost Mitigation of ReRAM Variability for Deep Learning Applications. 2021 IEEE 39th International Conference on Computer Design (ICCD), pp. 269-276. DOI: 10.1109/ICCD53106.2021.00051. Online publication date: Oct-2021.
