
BALS: Blocked Alternating Least Squares for Parallel Sparse Matrix Factorization on GPUs


Abstract:

Matrix factorization on sparse matrices has been proven to be an effective approach for data mining and machine learning. However, prior parallel implementations of matrix factorization fail to capture the internal social property embedded in real-world use cases. This article presents an efficient implementation of the alternating least squares (ALS) algorithm, called BALS, built on top of a new sparse matrix format for parallel matrix factorization. The BALS storage format organizes the sparse matrix into 2D tiles to avoid repeated data loads and improve data reuse. We further propose a data reordering technique that sorts sparse matrices according to the number of nonzeros. The experimental results show that BALS yields superior performance over state-of-the-art implementations, i.e., our BALS generally runs faster than Gates’ implementation across different latent feature sizes, with speedups of up to 2.08× on K20C, 3.72× on TITAN X, and 3.13× on TITAN RTX. Compared with alternative matrix factorization algorithms, our BALS consistently outperforms CDMF, cuMF_CCD, and cuMF_SGD over various latent feature sizes and datasets. The reordering technique provides an extra improvement of up to 23.68 percent on K20C, 19.87 percent on TITAN X, and 20.38 percent on TITAN RTX.
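For background on the algorithm the abstract refers to: ALS factors a partially observed matrix R into low-rank factors U and V by alternately fixing one factor and solving a small regularized least-squares system per row of the other. The sketch below is a minimal NumPy illustration of that alternating scheme only (function name, dense observation mask, and parameters are ours for illustration); it is not the paper's tiled BALS format or its CUDA implementation.

```python
import numpy as np

def als_sketch(R, mask, k=8, reg=0.1, iters=10, seed=0):
    """Alternating least squares: fit R ~ U @ V.T on observed entries
    (mask[i, j] == 1). Illustrative dense version, not BALS itself."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = rng.standard_normal((m, k)) * 0.1
    V = rng.standard_normal((n, k)) * 0.1
    I = np.eye(k)
    for _ in range(iters):
        # Fix V: each row of U solves a k-by-k regularized normal system
        # built only from that row's observed entries.
        for i in range(m):
            obs = mask[i].astype(bool)
            Vo = V[obs]
            U[i] = np.linalg.solve(Vo.T @ Vo + reg * I, Vo.T @ R[i, obs])
        # Fix U: symmetric update for each row of V (i.e., each column of R).
        for j in range(n):
            obs = mask[:, j].astype(bool)
            Uo = U[obs]
            V[j] = np.linalg.solve(Uo.T @ Uo + reg * I, Uo.T @ R[obs, j])
    return U, V
```

Because each row/column update is independent, these inner loops are what a GPU implementation parallelizes; the tiled storage format described in the abstract addresses how the sparse entries feeding each small system are loaded and reused.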
Published in: IEEE Transactions on Parallel and Distributed Systems ( Volume: 32, Issue: 9, 01 September 2021)
Page(s): 2291 - 2302
Date of Publication: 09 March 2021

