DOI: 10.1145/3489517.3530574
Research article, DAC Conference Proceedings

TAIM: ternary activation in-memory computing hardware with 6T SRAM array

Published: 23 August 2022

Abstract

Recently, various in-memory computing accelerators for low-precision neural networks have been proposed. While in-memory Binary Neural Network (BNN) accelerators achieve significant energy efficiency, BNNs suffer severe accuracy degradation compared to their full-precision counterparts. To mitigate this problem, we propose TAIM, an in-memory computing hardware that supports ternary activation with negligible hardware overhead. In TAIM, a 6T SRAM cell computes the multiplication between a ternary activation and a binary weight. Since the 6T SRAM cell consumes no energy when the input activation is 0, the proposed TAIM hardware can achieve even higher energy efficiency than the BNN case by exploiting zero-valued inputs. We fabricated the proposed TAIM hardware in a 28 nm CMOS process and evaluated its energy efficiency on various image classification benchmarks. The experimental results show that TAIM achieves ~3.61× higher energy efficiency on average compared to previous designs that support ternary activation.
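As a rough behavioral sketch (not the fabricated circuit), the ternary-activation × binary-weight multiply-accumulate described in the abstract can be modeled in software. The function name and the zero-counting bookkeeping below are illustrative assumptions, but they show why zero-valued activations translate directly into skipped, energy-free cell operations:

```python
def ternary_binary_dot(activations, weights):
    """Behavioral model of a ternary-activation / binary-weight dot product.

    activations: values in {-1, 0, +1}; weights: values in {-1, +1}.
    Returns the accumulated sum and the number of zero activations,
    which in a TAIM-style array correspond to cells that consume
    no dynamic energy.
    """
    assert all(a in (-1, 0, 1) for a in activations)
    assert all(w in (-1, 1) for w in weights)
    acc = sum(a * w for a, w in zip(activations, weights))
    zeros = activations.count(0)  # operations the hardware can skip
    return acc, zeros

# Example: half the activations are zero, so half the cells stay idle.
print(ternary_binary_dot([1, 0, -1, 0], [1, -1, -1, 1]))  # → (2, 2)
```

In the real macro the accumulation happens in analog on the bitline; the zero count here simply stands in for the fraction of cells that draw no energy, which is where the efficiency gain over the binary-activation case comes from.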


Cited By

  • (2024) TA-Quatro: Soft Error-Resilient and Power-Efficient SRAM Cell for ADC-Less Binary Weight and Ternary Activation In-Memory Computing. Electronics 13(15), 2904. DOI: 10.3390/electronics13152904. Online publication date: 23-Jul-2024.

Published In

cover image ACM Conferences
DAC '22: Proceedings of the 59th ACM/IEEE Design Automation Conference
July 2022
1462 pages
ISBN:9781450391429
DOI:10.1145/3489517

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. deep neural network
  2. in-memory computing
  3. ternary activation


Funding Sources

  • Samsung Research Funding Center
  • Korea government (MSIT)

Conference

DAC '22
Sponsor:
DAC '22: 59th ACM/IEEE Design Automation Conference
July 10 - 14, 2022
San Francisco, California

Acceptance Rates

Overall acceptance rate: 1,770 of 5,499 submissions (32%)


Article Metrics

  • Downloads (last 12 months): 95
  • Downloads (last 6 weeks): 4
Reflects downloads up to 05 Mar 2025

