DOI: 10.1145/3370748.3406574

RAMANN: in-SRAM differentiable memory computations for memory-augmented neural networks

Published: 10 August 2020

ABSTRACT

Memory-Augmented Neural Networks (MANNs) have been shown to outperform Recurrent Neural Networks (RNNs) at capturing long-term dependencies. Because MANNs are equipped with an external memory, they can store and retrieve more data over longer periods of time. A MANN generally consists of a network controller and an external memory. Unlike a conventional memory with read/write operations to specific addresses, a differentiable memory has soft read and write operations that involve all the data stored in the memory. Such soft read and write operations present new computational challenges for hardware implementations of MANNs. In this work, we present a novel in-memory computing primitive to accelerate the differentiable memory operations of MANNs in SRAMs. We propose a 9T SRAM macro capable of performing both Hamming similarity and dot products, the operations crucial for the soft read/write and addressing mechanisms in MANNs. To compute Hamming similarity, we operate the 9T cell in analog Content-Addressable Memory (CAM) mode by applying the key at the bitlines (RBLs/RBLBs) in each column and reading out the analog output at the sourceline (SL). To perform the dot-product operation, the input data is applied at the wordlines, and the current flowing through the RBLs represents the dot product between the input data and the stored bits. The proposed SRAM array performs computations that reliably match the operations required for a differentiable memory, thereby enabling energy-efficient on-chip acceleration of MANNs. Compared to standard GPU systems, the proposed scheme achieves 43x performance and 85x energy improvements for computing the differentiable memory operations.
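To make the soft read/write semantics concrete, below is a minimal NumPy sketch of the differentiable memory operations the abstract describes: content-based addressing via dot-product or Hamming similarity, a soft read that blends all rows, and a soft write that partially updates every row, in the style of Neural Turing Machines. All names here (address, soft_read, soft_write, the sharpening factor beta) are illustrative assumptions, not the paper's interface; the paper realizes the similarity computations in analog inside the 9T SRAM array rather than in software.

```python
# Illustrative sketch of differentiable memory operations (not the paper's
# implementation; the paper computes the similarities in analog, in-SRAM).
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def hamming_similarity(memory_bits, key_bits):
    # Per-row count of matching bit positions; the analog CAM mode reads
    # this out as an analog value on each row's sourceline (SL).
    return (memory_bits == key_bits).sum(axis=1)

def address(memory, key, beta=1.0):
    # Content-based addressing: dot-product similarity of the key with
    # EVERY stored row, sharpened by beta and normalized into weights.
    scores = memory @ key
    return softmax(beta * scores)

def soft_read(memory, weights):
    # Soft read: a weighted sum over all rows, not one fixed address.
    return weights @ memory

def soft_write(memory, weights, erase, add):
    # Soft write: every row is partially erased and updated in
    # proportion to its addressing weight.
    memory = memory * (1.0 - np.outer(weights, erase))
    return memory + np.outer(weights, add)

# Example: a memory of 8 rows, each 4 elements wide.
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 4))
w = address(M, key=rng.standard_normal(4), beta=2.0)  # weights over all 8 rows
r = soft_read(M, w)
M = soft_write(M, w, erase=np.full(4, 0.5), add=rng.standard_normal(4))
```

Because the weights are a differentiable function of the key, gradients flow through the read and write paths, which is what lets the controller learn its addressing. It is also why every operation touches all memory rows at once, which motivates computing the similarities in parallel inside the SRAM array.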


Supplemental Material

3370748.3406574.mp4 (mp4, 20.5 MB)


  • Published in

    ISLPED '20: Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design
    August 2020
    263 pages
ISBN: 9781450370530
DOI: 10.1145/3370748

    Copyright © 2020 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    • Published: 10 August 2020


    Qualifiers

    • research-article

    Acceptance Rates

Overall Acceptance Rate: 398 of 1,159 submissions, 34%
