
A Sparse Matrix Optimization Method for Graph Neural Networks Training

  • Conference paper, in: Knowledge Science, Engineering and Management (KSEM 2023)

Abstract

Graph neural networks (GNNs) have shown great application potential in scientific research, biomedicine, and other fields, owing to their superior feature representation capabilities for graph data with non-Euclidean structure. These capabilities rely on efficient sparse matrix-matrix multiplication (SpMM) and sparse matrix-vector multiplication (SpMV) over sparse representations of the graph structure. However, SpMM exhibits high memory occupation and irregular memory access, which lead to low storage and computational efficiency. To address these issues, this paper proposes a sparse matrix optimization method comprising a sparse matrix format and a performance model. The format, named BMCOO, divides the sparse matrix into multiple blocks and uses a bitmap to compress the position information of the non-zero elements in each block. The paper further designs an SpMV algorithm for the BMCOO format on GPUs. In addition, a multi-channel SpMV performance model is constructed to predict the execution time of SpMV from the sparse matrix scale and system architecture parameters; this model then fine-tunes the graph partitioning used in the GNN training process. Experiments on the SuiteSparse and Open Graph Benchmark datasets verify the effectiveness and superiority of the proposed method.
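The abstract describes BMCOO only at a high level: partition the sparse matrix into blocks and record each block's non-zero positions in a bitmap, with the non-zero values packed separately. The sketch below is a minimal, CPU-side Python illustration of that idea, assuming a fixed 4x4 block size, row-major bit ordering within each block, and illustrative function names (`bmcoo_encode`, `bmcoo_spmv`); the paper's actual GPU data layout, kernel, and performance model are not reproduced here.

```python
import numpy as np

BLOCK = 4  # tile size; an assumption for illustration, not the paper's choice

def bmcoo_encode(dense):
    """Encode a dense matrix into a BMCOO-like list of blocks.

    Each non-empty BLOCK x BLOCK tile is stored as
    (block_row, block_col, bitmap, packed_values), where bit r*BLOCK+c
    of the bitmap marks a non-zero at local position (r, c).
    """
    m, n = dense.shape
    blocks = []
    for bi in range(0, m, BLOCK):
        for bj in range(0, n, BLOCK):
            tile = dense[bi:bi + BLOCK, bj:bj + BLOCK]
            if not np.any(tile):
                continue  # all-zero tiles are skipped entirely
            bitmap, vals = 0, []
            for r in range(tile.shape[0]):
                for c in range(tile.shape[1]):
                    if tile[r, c] != 0:
                        bitmap |= 1 << (r * BLOCK + c)
                        vals.append(tile[r, c])
            blocks.append((bi // BLOCK, bj // BLOCK, bitmap, np.array(vals)))
    return blocks

def bmcoo_spmv(blocks, x, m):
    """Reference (sequential) SpMV y = A @ x over BMCOO blocks."""
    y = np.zeros(m)
    for brow, bcol, bitmap, vals in blocks:
        k = 0  # index into packed values, in bitmap (row-major) order
        for bit in range(BLOCK * BLOCK):
            if (bitmap >> bit) & 1:
                r, c = divmod(bit, BLOCK)
                y[brow * BLOCK + r] += vals[k] * x[bcol * BLOCK + c]
                k += 1
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((10, 12)) * (rng.random((10, 12)) < 0.2)  # ~20% dense
    x = rng.random(12)
    assert np.allclose(bmcoo_spmv(bmcoo_encode(A), x, 10), A @ x)
    print("BMCOO SpMV matches dense product")
```

Relative to plain COO, which stores explicit (row, column) indices per non-zero, the bitmap pins each non-zero's position within its block using a few bits, which is the storage saving the abstract points to; a GPU kernel would additionally need a parallel mapping from bitmap bits to packed-value offsets (e.g., via popcount prefix sums).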



Acknowledgement

This work was supported by the National Key R&D Program of China (No. 2021ZD0110403). We would like to thank the MindSpore team for their support.

Author information


Corresponding author

Correspondence to Jue Wang.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Yao, T. et al. (2023). A Sparse Matrix Optimization Method for Graph Neural Networks Training. In: Jin, Z., Jiang, Y., Buchmann, R.A., Bi, Y., Ghiran, A.M., Ma, W. (eds) Knowledge Science, Engineering and Management. KSEM 2023. Lecture Notes in Computer Science, vol 14117. Springer, Cham. https://doi.org/10.1007/978-3-031-40283-8_11

  • DOI: https://doi.org/10.1007/978-3-031-40283-8_11


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-40282-1

  • Online ISBN: 978-3-031-40283-8

  • eBook Packages: Computer Science, Computer Science (R0)
