Performance and Accuracy Tradeoffs for Training Graph Neural Networks on ReRAM-Based Architectures

IEEE Journals & Magazine | IEEE Xplore


Abstract:

Graph neural networks (GNNs) are a variant of deep neural networks (DNNs) that operate on graphs. GNNs are more complex than DNNs, however, because they simultaneously exhibit attributes of both DNN and graph computations. In this work, we propose a ReRAM-based 3-D manycore processing-in-memory architecture called ReMaGN, tailored for on-chip training of GNNs. ReMaGN implements GNN training using a reduced-precision representation to make the computation faster and to reduce the load on the communication backbone. However, reduced precision can compromise training accuracy. Hence, we undertake a study of the performance and accuracy tradeoffs in such architectures. We demonstrate that ReMaGN outperforms conventional GPUs by up to 9.5× (on average 7.1×) in terms of execution time, while being up to 42× (on average 33.5×) more energy efficient, without sacrificing accuracy.
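The abstract does not specify ReMaGN's exact number format, so as an illustration only, the sketch below shows generic signed fixed-point quantization, a common reduced-precision scheme for in-memory computing, and checks the resulting rounding error against the half-LSB bound. The function name and bit widths are hypothetical choices, not taken from the paper.

```python
def quantize_fixed_point(x, frac_bits=8, total_bits=16):
    """Round x onto a signed fixed-point grid with `frac_bits` fractional bits,
    saturating at the representable range of `total_bits`-bit values."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    q = max(lo, min(hi, round(x * scale)))  # saturate, then snap to grid
    return q / scale

# Quantizing a few example weights and bounding the worst-case error:
weights = [0.12345, -0.98765, 0.5, -0.25]
quantized = [quantize_fixed_point(w) for w in weights]
max_err = max(abs(w - q) for w, q in zip(weights, quantized))
# Rounding to the nearest grid point is off by at most half an LSB.
assert max_err <= 1 / (2 * 2 ** 8)
```

Training at such precision trades a bounded per-value error against smaller operands, which is what shrinks both compute time and traffic on the communication backbone; the paper's contribution is quantifying how far precision can drop before accuracy degrades.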
Page(s): 1743 - 1756
Date of Publication: 15 September 2021
