Parallel Computing

Volume 3, Issue 3, July 1986, Pages 193-204

Parallel implementation of multifrontal schemes

https://doi.org/10.1016/0167-8191(86)90019-0

Abstract

We consider the direct solution of large sparse sets of linear equations in an MIMD environment. We base our algorithm on a multifrontal approach and show how to distribute tasks among processors according to an elimination tree that can be generated automatically from any pivot strategy. We study organizational aspects of the scheme for shared-memory multiprocessor configurations and consider the implications for multiprocessors with local memories and a communication network.
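
The elimination tree mentioned in the abstract can be computed directly from the sparsity pattern that the pivot order induces. Below is a minimal Python sketch, not the paper's own (Fortran) code: it uses the standard path-compression construction, with a hypothetical input row_pattern[k] listing the columns j < k in which row k of the lower triangle is nonzero.

    def elimination_tree(n, row_pattern):
        # parent[j] = first row above j in which column j of the factor is
        # nonzero; -1 marks a root. Disjoint subtrees are independent, so
        # elimination can start at every leaf simultaneously.
        parent = [-1] * n
        ancestor = [-1] * n              # path-compressed ancestor links
        for k in range(n):
            for j in row_pattern[k]:     # nonzero A[k][j], j < k
                # climb from j toward its current root, re-rooting at k
                while j != -1 and j < k:
                    nxt = ancestor[j]
                    ancestor[j] = k
                    if nxt == -1:        # j had no ancestor: k adopts it
                        parent[j] = k
                    j = nxt
        return parent

For a 4-by-4 "arrow" pattern in which only row 3 couples to the others (row_pattern = {0: [], 1: [], 2: [], 3: [0, 1, 2]}), this returns parent = [3, 3, 3, -1]: columns 0, 1, and 2 are leaves of the tree, so their fronts can be eliminated on three processors at once.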


Cited by (84)

  • Cached Gaussian elimination for simulating Stokes flow on domains with repetitive geometry

    2020, Journal of Computational Physics
    Citation Excerpt:

    These methods were replaced with multifrontal methods, which are based on the observation that elimination dependencies take the form of an elimination tree, and a new independent elimination front can be started from each leaf of the tree [24]. Since these fronts are independent, they may be eliminated in parallel [25–28]. A process of amalgamation (also called supernodes) is used to eliminate multiple rows with similar sparsity patterns at the same time to exploit more efficient level-3 BLAS operations [24,29,30].

  • Optimized sparse Cholesky factorization on hybrid multicore architectures

    2018, Journal of Computational Science
    Citation Excerpt:

    The structure of the elimination tree holds information about data dependencies between supernodes. The underlying tree structure [9] can help us determine which supernodes may be factorized concurrently. In our algorithm, we use multithreading (using OpenMP on the CPU) to leverage the task parallelism in the underlying tree structure.

  • Parallel and fully recursive multifrontal sparse Cholesky

    2004, Future Generation Computer Systems
  • The impact of high-performance computing in the solution of linear systems: Trends and problems

    2000, Journal of Computational and Applied Mathematics
    Citation Excerpt:

    An important aspect of these approaches is that the parallelism is obtained directly because of the sparsity in the system. In general, we exhibit this form of parallelism through the assembly tree of Section 4 where operations at nodes which are not on the same (unique) path to the root (that is none is a descendant of another) are independent and can be executed in parallel (see, for example, [37,70]). The set of pivots discussed above could correspond to leaf nodes of such a tree.

  • Multifrontal parallel distributed symmetric and unsymmetric solvers

    2000, Computer Methods in Applied Mechanics and Engineering
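
Several of the excerpts above describe the same scheduling idea: fronts at the leaves of the elimination tree are mutually independent, and an interior node becomes ready only once all of its children have been eliminated. The following is a hedged sketch of that dependency-driven scheduling using a Python thread pool, not any of the cited solvers' actual machinery: factor_front is a hypothetical stand-in for the frontal assembly and elimination work, and parent is an elimination tree such as the one built in the sketch after the abstract.

    from collections import defaultdict
    from concurrent.futures import ThreadPoolExecutor
    import threading

    def parallel_eliminate(parent, factor_front, workers=4):
        n = len(parent)
        if n == 0:
            return
        children = defaultdict(list)
        for j, p in enumerate(parent):
            if p != -1:
                children[p].append(j)
        pending = [len(children[j]) for j in range(n)]  # unfinished children
        lock = threading.Lock()
        done = threading.Event()
        remaining = [n]

        with ThreadPoolExecutor(max_workers=workers) as pool:
            def run(j):
                factor_front(j, children[j])  # assemble child updates, eliminate
                with lock:
                    remaining[0] -= 1
                    if remaining[0] == 0:
                        done.set()            # every node has been eliminated
                    p = parent[j]
                    ready = False
                    if p != -1:
                        pending[p] -= 1
                        ready = (pending[p] == 0)
                if ready:                     # last child finished: parent is ready
                    pool.submit(run, p)

            for j in range(n):
                if pending[j] == 0:           # leaves: independent starting fronts
                    pool.submit(run, j)
            done.wait()

Amalgamating nodes with similar sparsity patterns into supernodes, as the first excerpt notes, simply coarsens this tree so that each task carries enough dense work (level-3 BLAS) to be efficient; the scheduling logic itself is unchanged.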

Work supported in part by the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy, under Contract W-31-109-Eng-38, while the author was visiting the MCS Division, Argonne National Laboratory, IL 60439, U.S.A. The author is with the United Kingdom Atomic Energy Authority.
