Abstract
As matrix sizes in large-scale data analysis continue to grow, a series of Spark-based distributed matrix computation systems have emerged. These systems typically split a matrix into matrix blocks and store the blocks in an RDD. To implement matrix operations, they manipulate matrices through coarse-grained RDD operations; that is, they load the entire RDD even when only a subset of the matrix blocks is needed. This causes redundant I/O when running SGD-based algorithms, since SGD samples only a mini-batch of the data. Moreover, these systems typically partition matrix blocks with a hash scheme that is oblivious to the sampling semantics. In this work, we propose sampling-aware data loading, which uses fine-grained RDD operations to skip the partitions that contain no sampled data and thus reduces redundant I/O. Furthermore, we exploit a semantic-based partition scheme, which gathers sampled blocks into the same partitions, to further reduce the number of accessed partitions. We modify SystemDS to implement Emacs, an efficient matrix computation engine for SGD-based algorithms on Apache Spark. Our experimental results show that Emacs outperforms existing Spark-based matrix computation systems by 37%.
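To make the idea concrete, below is a minimal, self-contained Scala sketch of how sampling-aware loading and semantic-based partitioning could look on plain Spark RDDs. It is not the Emacs implementation: the BlockId key, SamplingAwarePartitioner, and groupSize parameter are illustrative stand-ins, and Spark's PartitionPruningRDD developer API is used here as a stand-in for the fine-grained RDD operation described in the abstract. The sketch groups row blocks into ranges so that a mini-batch drawn from one group maps to a small set of partitions, and only those partitions are read.

```scala
import org.apache.spark.rdd.{PartitionPruningRDD, RDD}
import org.apache.spark.{Partitioner, SparkConf, SparkContext}

// Hypothetical key for a matrix block: (row-block index, column-block index).
case class BlockId(rowBlock: Long, colBlock: Long)

// Simplified semantic-based partitioner: row blocks in the same range of size
// `groupSize` land in the same partition, so a mini-batch sampled from one
// group touches one partition instead of being scattered by a plain hash.
class SamplingAwarePartitioner(override val numPartitions: Int, groupSize: Long)
    extends Partitioner {
  override def getPartition(key: Any): Int = key match {
    case BlockId(rowBlock, _) => ((rowBlock / groupSize) % numPartitions).toInt
    case _                    => 0
  }
}

object MiniBatchLoadingSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("sampling-aware-sketch").setMaster("local[*]"))

    // Toy blocked matrix: 64 x 4 blocks; arrays stand in for block payloads.
    val blocks: RDD[(BlockId, Array[Double])] = sc.parallelize(
      for (r <- 0L until 64L; c <- 0L until 4L)
        yield (BlockId(r, c), Array.fill(16)(1.0)))

    val partitioner = new SamplingAwarePartitioner(numPartitions = 16, groupSize = 4L)
    val partitioned = blocks.partitionBy(partitioner).cache()

    // Mini-batch sampling: pick a few row blocks, derive the partitions that
    // can contain them, and prune the rest instead of scanning the whole RDD.
    val sampledRowBlocks = Set(3L, 5L, 42L)
    val neededPartitions: Set[Int] =
      sampledRowBlocks.map(r => partitioner.getPartition(BlockId(r, 0L)))

    val miniBatch = PartitionPruningRDD
      .create(partitioned, neededPartitions.contains)
      .filter { case (id, _) => sampledRowBlocks.contains(id.rowBlock) }

    println(s"partitions touched: ${neededPartitions.size} of ${partitioned.getNumPartitions}")
    println(s"blocks in mini-batch: ${miniBatch.count()}")
    sc.stop()
  }
}
```

With a sampling-oblivious hash partitioner, the same mini-batch would typically be spread across most partitions of the RDD, which is exactly the redundant I/O that sampling-aware loading is meant to avoid.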
Acknowledgments
This work was supported by the National Natural Science Foundation of China (No. 61902128) and the Shanghai Sailing Program (No. 19YF1414200).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Han, B., Chen, Z., Xu, C., Zhou, A. (2022). Efficient Matrix Computation for SGD-Based Algorithms on Apache Spark. In: Bhattacharya, A., et al. Database Systems for Advanced Applications. DASFAA 2022. Lecture Notes in Computer Science, vol 13245. Springer, Cham. https://doi.org/10.1007/978-3-031-00123-9_25
Print ISBN: 978-3-031-00122-2
Online ISBN: 978-3-031-00123-9