Abstract:
Deep Neural Networks (DNNs) are often associated with a large number of data-parallel computations. Therefore, data-centric computing paradigms, such as Processing in Memory (PIM), are being widely explored for DNN acceleration applications. A recent PIM architecture, developed and commercialized by the UPMEM company, has demonstrated an impressive performance boost over traditional CPU-based systems for a wide range of data-parallel applications. However, the application domain of DNN acceleration is yet to be explored on this PIM platform. In this work, we present successful implementations of DNNs on the UPMEM PIM system. We explore multiple operation mapping schemes with different optimization goals and accelerate two CNN algorithms using these schemes. Based on data obtained from the physical implementation of the DNNs on the UPMEM system, we compare the performance of our DNN implementation with that of several other recently proposed PIM architectures.
Date of Conference: 05-08 September 2022
Date Added to IEEE Xplore: 10 October 2022