Abstract:
Reconfigurable nanotechnologies such as Silicon Nanowire Field Effect Transistors (FETs) are a promising technology that not only facilitates lower power consumption but also supports multiple functionalities per computational unit through reconfigurability. These features motivate us to design a novel, energy-efficient hardware accelerator for memory-intensive applications, including convolutional neural networks (CNNs) and deep neural networks (DNNs). To accelerate the computations, we design Multiply and Accumulate (MAC) units built from silicon nanowire reconfigurable FETs (RFETs). The use of RFETs leads to nearly 70% power reduction compared to a traditional CMOS implementation, along with lower latency in performing the computations. Further, to optimize the overheads and improve memory efficiency, we introduce a novel approximation technique for RFETs. The resulting RFET-based approximate adders reduce power, area, and delay while having a minimal impact on DNN/CNN accuracy. In addition, we carry out a detailed study of varied combinations of architectures involving CMOS, RFETs, accurate adders, and approximate adders to demonstrate the benefits of the proposed RFET-based approximate accelerator. The proposed RFET-based accelerator achieves an accuracy of 94% on the MNIST dataset with 93% and 73% reduction in the area, power, and delay metrics, respectively, compared to state-of-the-art hardware accelerator architectures.
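To make the accelerator's core idea concrete, the following Python is a minimal sketch of a MAC unit that accumulates through an approximate adder. The abstract does not disclose the paper's actual adder circuit, so this models one common approximation family (a lower-part-OR adder, where the low bits are OR-ed instead of added); the function names approx_add and approx_mac, the bit widths, and the parameter k are all illustrative assumptions, not the authors' design.

# Minimal sketch, assuming a lower-part-OR style approximate adder -- one
# common approximation family; the abstract does not disclose the paper's
# actual adder design, so every name and parameter here is illustrative.

def approx_add(a: int, b: int, k: int = 4, width: int = 16) -> int:
    """Add two unsigned integers, approximating the k least-significant bits.

    The low k bits are OR-ed instead of added, removing their carry chain
    (the source of the hardware saving); the upper bits are added exactly,
    with no carry propagated in from the approximate low part.
    """
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)            # carry-free approximate low part
    high = ((a >> k) + (b >> k)) << k        # exact high part
    return (high | low) & ((1 << width) - 1)

def approx_mac(weights, activations, k: int = 4) -> int:
    """Multiply-and-accumulate, accumulating through the approximate adder."""
    acc = 0
    for w, x in zip(weights, activations):
        acc = approx_add(acc, w * x, k=k)
    return acc

if __name__ == "__main__":
    w = [3, 5, 2, 7]
    x = [4, 1, 6, 2]
    exact = sum(wi * xi for wi, xi in zip(w, x))
    for k in (2, 4):  # wider approximate region -> more savings, more error
        print(f"k={k}: exact={exact} approx={approx_mac(w, x, k=k)}")

In hardware, dropping the carry chain of the low k bits shortens the adder's critical path and removes gates, which is where area, power, and delay savings of this kind come from; k acts as the knob trading accuracy for efficiency, mirroring the accuracy-versus-overhead trade-off the abstract reports.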
Date of Conference: 21-25 May 2023
Date Added to IEEE Xplore: 21 July 2023