Analysis of a sparse hypermatrix Cholesky with fixed-sized blocking

Published in: Applicable Algebra in Engineering, Communication and Computing

Abstract

We describe the construction of a sparse Cholesky factorization based on a hypermatrix data structure. This storage scheme produces a recursive 2D partitioning of a sparse matrix and can be useful for some large sparse matrices. Subblocks are stored as dense matrices, so efficient BLAS3 routines can be used. However, since we are dealing with sparse matrices, some zeros may be stored in those dense blocks. The overhead introduced by operating on these zeros can become large and considerably degrade performance. We present the ways in which we deal with this overhead. Using matrices from different areas (interior point methods for linear programming and finite element methods), we evaluate our sequential in-core hypermatrix sparse Cholesky implementation, compare its performance with several other codes, and analyze the results. Despite using a simple fixed-size partitioning of the matrix, our code obtains competitive performance.
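The abstract's central trade-off is that fixed-size dense subblocks enable BLAS3 kernels but force some explicit zeros to be stored and operated on. A minimal sketch of one level of such fixed-size 2D blocking, and of measuring the stored-zero overhead, is given below; this is not the authors' implementation, and the function names and block layout are illustrative assumptions:

```python
# Sketch (illustrative, not the paper's code): fixed-size 2D blocking of a
# sparse matrix, keeping only blocks that contain at least one nonzero entry.

def blockify(A, bs):
    """Partition the dense list-of-lists matrix A into bs x bs blocks,
    dropping blocks that are entirely zero.
    Returns a dict mapping (block_row, block_col) -> dense block."""
    n = len(A)
    blocks = {}
    for bi in range(0, n, bs):
        for bj in range(0, n, bs):
            blk = [row[bj:bj + bs] for row in A[bi:bi + bs]]
            if any(x != 0 for row in blk for x in row):
                blocks[(bi // bs, bj // bs)] = blk
    return blocks

def zero_overhead(blocks):
    """Fraction of entries inside the stored dense blocks that are zeros:
    these are the entries a blocked kernel computes on needlessly."""
    stored = sum(len(b) * len(b[0]) for b in blocks.values())
    zeros = sum(1 for b in blocks.values() for row in b for x in row if x == 0)
    return zeros / stored if stored else 0.0

# A 4x4 sparse matrix blocked with bs = 2: the two all-zero off-diagonal
# blocks are dropped, but the stored diagonal blocks still contain zeros.
A = [[4, 0, 0, 0],
     [0, 3, 0, 0],
     [0, 0, 2, 1],
     [0, 0, 1, 2]]
blocks = blockify(A, 2)   # keeps blocks (0, 0) and (1, 1)
```

In the full hypermatrix scheme this partitioning is applied recursively and the retained dense blocks are handed to BLAS3 routines; the `zero_overhead` quantity is a simple proxy for the wasted work on zeros that the paper sets out to reduce.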



Author information

Corresponding author

Correspondence to José R. Herrero.

Additional information

This work was supported by the Ministerio de Educación y Ciencia of Spain (TIN2004-07739-C02-01).

Cite this article

Herrero, J.R., Navarro, J.J. Analysis of a sparse hypermatrix Cholesky with fixed-sized blocking. AAECC 18, 279–295 (2007). https://doi.org/10.1007/s00200-007-0039-8
