
DRGN: a dynamically reconfigurable accelerator for graph neural networks

  • Original Research
  • Published in: Journal of Ambient Intelligence and Humanized Computing

Abstract

Graph neural networks (GNNs) have achieved great success in processing non-Euclidean geometric data. However, the irregular memory accesses of the aggregation phase and the power-law degree distribution of real-world graphs challenge the memory hierarchies and caching policies of existing CPUs and GPUs. Meanwhile, the rapidly growing number of GNN algorithms places higher demands on the flexibility of the hardware architecture. In this work, we design a dynamically reconfigurable GNN accelerator (named DRGN) that supports multiple GNN algorithms. Specifically, we first propose a vertex reordering algorithm and an adjacency matrix compression algorithm to improve graph data locality. Furthermore, to improve bandwidth utilization and the reuse rate of node features, we propose a dedicated prefetcher that significantly improves the hit rate. Finally, we propose a scheduling mechanism that assigns tasks to PE units to address workload imbalance. The effectiveness of the proposed DRGN accelerator was evaluated on three GNN algorithms: PageRank, GCN, and GraphSAGE. Compared with CPU execution, DRGN achieves speedups of 231× on PageRank, 150× on GCN, and 39× on GraphSAGE. Compared with state-of-the-art GNN accelerators, DRGN achieves higher energy efficiency despite being fabricated in a relatively low-end process node.
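The abstract does not specify the paper's actual reordering or compression algorithms, but the general idea can be illustrated with a minimal sketch: relabeling vertices by descending degree (so the hub vertices of a power-law graph cluster at low indices) and compressing the sparse adjacency matrix into CSR form. The function names `reorder_by_degree` and `to_csr` below are illustrative, not from the paper.

```python
from collections import defaultdict

def reorder_by_degree(edges, num_nodes):
    """Relabel vertices in descending degree order so that the
    high-degree hubs of a power-law graph cluster at low indices,
    improving locality of feature accesses during aggregation."""
    degree = [0] * num_nodes
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    # old id -> new id: the highest-degree vertex gets index 0
    order = sorted(range(num_nodes), key=lambda v: -degree[v])
    remap = {old: new for new, old in enumerate(order)}
    return [(remap[u], remap[v]) for u, v in edges], remap

def to_csr(edges, num_nodes):
    """Compress the adjacency matrix to CSR (row pointers plus column
    indices), storing only the non-zero entries of the sparse graph."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    row_ptr, col_idx = [0], []
    for u in range(num_nodes):
        col_idx.extend(sorted(adj[u]))
        row_ptr.append(len(col_idx))
    return row_ptr, col_idx

# Tiny directed example graph
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)]
new_edges, remap = reorder_by_degree(edges, 5)
row_ptr, col_idx = to_csr(new_edges, 5)
```

In CSR form, the neighbors of vertex `u` are `col_idx[row_ptr[u]:row_ptr[u+1]]`, so an accelerator can stream each vertex's adjacency list as one contiguous burst rather than scanning a dense matrix row.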



Data availability statement

The datasets used and analyzed during the current study are available in the DRGN-dataset repository [https://github.com/Haley-hkb/DRGN-dataset].


Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant 62176206, and in part by the Aeronautical Science Foundation of China under Grant 2020Z066070001, and in part by Key-Area Research and Development Program of Guangdong Province under Grant 2019B010154002.

Funding

National Natural Science Foundation of China (grant no. 62176206).

Author information


Correspondence to Chen Yang or Kai-Bo Huo.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yang, C., Huo, K., Geng, LF. et al. DRGN: a dynamically reconfigurable accelerator for graph neural networks. J Ambient Intell Human Comput 14, 8985–9000 (2023). https://doi.org/10.1007/s12652-022-04402-x
