DOI: 10.1145/3293883.3295701

Adaptive sparse matrix-matrix multiplication on the GPU

Published: 16 February 2019

ABSTRACT

In the ongoing efforts targeting the vectorization of linear algebra primitives, sparse matrix-matrix multiplication (SpGEMM) has received considerably less attention than sparse matrix-vector multiplication (SpMV). While both are equally important, this disparity can be attributed mainly to the additional formidable challenges raised by SpGEMM.
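One source of that disparity is that SpMV has a dense output of known size, so rows can be processed independently with no dynamic memory management. A minimal Python sketch of CSR-based SpMV illustrates this (the CSR layout is standard; the function name is for illustration only and not from the paper):

```python
def csr_spmv(row_ptr, col_idx, vals, x):
    """Sparse matrix-vector product y = A @ x, with A in CSR form.

    row_ptr[i]:row_ptr[i+1] delimits the nonzeros of row i, whose
    column indices and values live in col_idx and vals.  The output
    vector has a known, fixed size, so every row is an independent,
    allocation-free unit of work.
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[k] * x[col_idx[k]]
    return y

# 2x3 matrix [[1, 0, 2], [0, 3, 0]] in CSR form
row_ptr = [0, 2, 3]
col_idx = [0, 2, 1]
vals = [1.0, 2.0, 3.0]
print(csr_spmv(row_ptr, col_idx, vals, [1.0, 1.0, 1.0]))  # [3.0, 3.0]
```

In SpGEMM, by contrast, the size and sparsity pattern of the output are unknown until the multiplication is performed, which is what makes memory management and load balancing on the GPU so much harder.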

In this paper, we present a dynamic approach for addressing SpGEMM on the GPU. Our approach works directly on the standard compressed sparse rows (CSR) data format. In comparison to previous SpGEMM implementations, our approach guarantees a homogeneous, load-balanced access pattern to the first input matrix and improves memory access to the second input matrix. It adaptively re-purposes GPU threads during execution and maximizes the time for which the efficient on-chip scratchpad memory can be used. Adhering to a completely deterministic scheduling pattern guarantees bit-stable results across repeated executions, a property missing from other approaches. Evaluation on an extensive sparse matrix benchmark suggests that our approach is the fastest SpGEMM implementation for highly sparse matrices (80% of the test set). When bit-stable results are sought, our approach is the fastest across the entire test set.
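The problem the abstract describes can be made concrete with a sequential, Gustavson-style row-by-row SpGEMM sketch in Python. This is not the paper's GPU implementation; it only shows why output sizes are unknown in advance and how a fixed accumulation order yields bit-stable results (the function name is illustrative):

```python
def csr_spgemm(a_ptr, a_idx, a_val, b_ptr, b_idx, b_val):
    """C = A @ B with both operands and the result in CSR form.

    For each row i of A, the rows of B selected by A's column indices
    are merged into an accumulator.  The nonzero count of each output
    row is unknown until the merge finishes -- the core difficulty a
    GPU SpGEMM must handle with bounded scratchpad memory.  Emitting
    columns in sorted order makes the accumulation deterministic.
    """
    c_ptr, c_idx, c_val = [0], [], []
    n_rows = len(a_ptr) - 1
    for i in range(n_rows):
        acc = {}
        for ka in range(a_ptr[i], a_ptr[i + 1]):
            j, a = a_idx[ka], a_val[ka]
            for kb in range(b_ptr[j], b_ptr[j + 1]):
                col = b_idx[kb]
                acc[col] = acc.get(col, 0.0) + a * b_val[kb]
        for col in sorted(acc):          # deterministic column order
            c_idx.append(col)
            c_val.append(acc[col])
        c_ptr.append(len(c_idx))
    return c_ptr, c_idx, c_val

# A = [[1, 2], [0, 3]], B = [[0, 4], [5, 0]]  ->  C = [[10, 4], [15, 0]]
print(csr_spgemm([0, 2, 3], [0, 1, 1], [1.0, 2.0, 3.0],
                 [0, 1, 2], [1, 0], [4.0, 5.0]))
```

On a GPU, the per-row accumulator cannot simply grow on demand; strategies such as the adaptive re-purposing of threads and scratchpad usage described above exist precisely to cope with this, and a floating-point summation order that varies between runs is what costs other implementations their bit-stability.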


Published in
        PPoPP '19: Proceedings of the 24th Symposium on Principles and Practice of Parallel Programming
        February 2019
        472 pages
        ISBN:9781450362252
        DOI:10.1145/3293883

        Copyright © 2019 ACM


        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Qualifiers

        • research-article

        Acceptance Rates

PPoPP '19 paper acceptance rate: 29 of 152 submissions, 19%. Overall acceptance rate: 230 of 1,014 submissions, 23%.
