ABSTRACT
Memory-Augmented Neural Networks (MANNs) have been shown to outperform Recurrent Neural Networks (RNNs) at capturing long-term dependencies. Because MANNs are equipped with an external memory, they can store and retrieve more data over longer periods of time. A MANN generally consists of a network controller and an external memory. Unlike a conventional memory, which reads and writes specific addresses, a differentiable memory performs soft read and write operations that involve all the data stored in the memory. Such soft reads and writes present new computational challenges for hardware implementations of MANNs. In this work, we present a novel in-memory computing primitive to accelerate the differentiable memory operations of MANNs in SRAMs. We propose a 9T SRAM macro capable of performing both Hamming-similarity and dot-product computations, which are crucial for the soft read/write and addressing mechanisms of MANNs. For Hamming similarity, we operate the 9T cell in analog Content-Addressable Memory (CAM) mode by applying the key at the bitlines (RBLs/RBLBs) in each column and reading out the analog result at the sourceline (SL). For the dot-product operation, the input data is applied at the wordlines, and the current flowing through the RBLs represents the dot product between the input data and the stored bits. The proposed SRAM array performs computations that reliably match the operations required by a differentiable memory, thereby enabling energy-efficient on-chip acceleration of MANNs. Compared to standard GPU systems, the proposed scheme achieves 43x performance and 85x energy improvements for computing the differentiable memory operations.
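To make the abstract's terminology concrete, the following is a minimal NumPy sketch of the two primitives the macro accelerates: a soft (differentiable) read/write in the style of Neural Turing Machine addressing, and a Hamming-similarity match across all stored rows. All names (`soft_read`, `beta`, `erase`, `add`, etc.) are illustrative assumptions, not APIs or parameters from the paper.

```python
import numpy as np

def soft_read(memory, key, beta=10.0):
    """Content-based soft read: similarity -> softmax weights -> weighted sum.

    Unlike an addressed read, every stored row contributes to the result.
    """
    # Cosine similarity between the key and every memory row.
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sims)
    w /= w.sum()                       # addressing weights over ALL rows
    return w @ memory, w

def soft_write(memory, w, erase, add):
    """Soft write: every row is partially erased and updated, scaled by w."""
    return memory * (1.0 - np.outer(w, erase)) + np.outer(w, add)

def hamming_similarity(stored_bits, key_bits):
    """Matching-bit count per row, as an analog CAM search would produce."""
    return (stored_bits == key_bits).sum(axis=1)

# Toy 3-slot memory of 3-element vectors.
M = np.eye(3)
r, w = soft_read(M, np.array([1.0, 0.1, 0.0]))
M2 = soft_write(M, w, erase=np.ones(3), add=np.array([0.5, 0.5, 0.0]))

# Binary search example: count matching bits of the key against each row.
bits = np.array([[1, 0, 1, 1],
                 [0, 1, 1, 0]])
h = hamming_similarity(bits, np.array([1, 0, 1, 0]))
```

The `memory @ key` product inside `soft_read` is the dot product the abstract maps onto wordline inputs and RBL currents, and `hamming_similarity` corresponds to the analog CAM-mode search; the softmax weighting and per-row erase/add are what make every memory row participate in each read and write.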