Abstract
A variety of algorithms have been proposed for sparse Cholesky factorization. This article investigates shared-memory implementations of several variants in a task-oriented execution model with dynamic scheduling. We concentrate on the degree of parallelism, the scalability, and the scheduling overhead of the different algorithms for relatively large numbers of processors. As execution platform, we use the SB-PRAM, a shared-memory machine with up to 4096 processors. The article can be considered a case study in which we try to answer the question of what performance can be expected for a typical irregular application on an ideal machine, on which the locality of memory accesses can be ignored but the overhead of managing data structures still takes effect.
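The task-oriented execution model described above can be illustrated with a small sketch: each column of the factor is a task that becomes ready once all columns it depends on have completed, and a dynamic scheduler repeatedly picks any ready column. The sketch below is a minimal sequential simulation of this idea for a column-oriented (left-looking) Cholesky factorization; the function name `task_based_cholesky` is our own, the matrix is stored densely for clarity, and we assume the input's nonzero structure already contains any fill-in (as for the arrow matrix in the usage example), so that the dependency structure can be read off the lower triangle directly. A real sparse code would run a symbolic factorization first and would hand the ready tasks to parallel workers.

```python
import math
from collections import deque

def task_based_cholesky(A):
    """Column-oriented Cholesky A = L L^T with column tasks released
    by a ready queue (a sequential stand-in for dynamic scheduling).
    Dense storage is used for clarity; assumes A's structure already
    includes all fill-in, so deps can be read from A's lower triangle."""
    n = len(A)
    # Work on the lower triangle of a copy of A.
    L = [[A[i][j] if i >= j else 0.0 for j in range(n)] for i in range(n)]
    # deps[j]: columns k < j whose updates column j must receive.
    deps = [{k for k in range(j) if L[j][k] != 0.0} for j in range(n)]
    children = [[] for _ in range(n)]      # columns waiting on column k
    for j in range(n):
        for k in deps[j]:
            children[k].append(j)
    missing = [len(deps[j]) for j in range(n)]
    ready = deque(j for j in range(n) if missing[j] == 0)
    done = 0
    while ready:                           # pick any ready column task
        j = ready.popleft()
        for k in deps[j]:                  # cmod(j, k): update column j
            for i in range(j, n):
                L[i][j] -= L[i][k] * L[j][k]
        L[j][j] = math.sqrt(L[j][j])       # cdiv(j): scale column j
        for i in range(j + 1, n):
            L[i][j] /= L[j][j]
        done += 1
        for c in children[j]:              # release dependent columns
            missing[c] -= 1
            if missing[c] == 0:
                ready.append(c)
    assert done == n                       # all column tasks completed
    return L
```

For an arrow matrix whose off-diagonal nonzeros lie only in the last row and column, columns 0 to n-2 are independent leaf tasks and can be processed in any order (or concurrently), while the last column waits for all of them; this is the kind of elimination-tree parallelism the factorization algorithms exploit.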
© 1997 Springer-Verlag Berlin Heidelberg
Rauber, T., Rünger, G., Scholtes, C. (1997). Scalability of parallel sparse Cholesky factorization. In: Lengauer, C., Griebl, M., Gorlatch, S. (eds) Euro-Par'97 Parallel Processing. Euro-Par 1997. Lecture Notes in Computer Science, vol 1300. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0002801