SSOR and ASSOR preconditioners for Block–Broyden method

https://doi.org/10.1016/j.amc.2006.09.107

Abstract

Solving nonlinear equations is a problem that arises constantly in practical engineering applications. This paper uses the Block–Broyden method for solving large-scale nonlinear systems and applies two preconditioners to the underlying linear systems: the SSOR preconditioner and the ASSOR method, which is based on SSOR. It discusses their implementation and compares the two algorithms from several aspects. Finally, it solves the nonlinear systems arising from the Bratu problem. Experimental results show that the preconditioning technique is effective for the Block–Broyden method and that the ASSOR preconditioner has the better overall performance. It can therefore be used in large-scale problems arising in scientific and engineering computing.

Introduction

With the rapid development of mathematics and computer science, research on solving nonlinear systems has attracted growing attention. In the past few years, a number of books entirely devoted to iterative methods for nonlinear systems have appeared [1], [2], [3], [4]. The most common iterative methods include stationary methods such as Jacobi, Gauss–Seidel and SOR, and nonstationary methods such as CG, MINRES, GMRES and BiCG. However, these methods share two problems: they usually need much storage and converge slowly, and they suffer from serious limitations such as a lack of reliability. These problems have made the algorithms difficult to apply in practical engineering computing, so much work has been done to overcome these limitations.

Brown and Saad [5] proposed the nonlinear GMRES(m) method in 1990, which is still widely used. By storing a matrix of dimension (m + 1), the algorithm needs very little storage, but the nonlinearity of the problem makes it difficult to devise parallel programs. A Block–Broyden (BB) algorithm was first proposed in 1997, together with a proof of its local convergence [6]. The complexity, parallel performance and storage requirement of the algorithm are discussed in [7], which showed theoretically that the algorithm is effective, highly parallel and low in storage, and then demonstrated its practical use on a 12-CPU SGI Power Challenge parallel machine. The numerical results, in accordance with the theoretical analysis, demonstrated the unique advantages and practical prospects of the algorithm. However, as pointed out in [6], [7], the iteration matrix in the Block–Broyden algorithm is block diagonal, so some of the coupling information among the nodes is lost, which affects the convergence speed to some extent. Hence, seeking suitable preconditioning methods is one effective way to address this problem. Some preconditioners have been proposed and discussed in Refs. [8], [9], [10].

With the realization that preconditioning is essential for the successful use of iterative methods, research on preconditioners has moved to center stage in recent years. The first use of the term in connection with iterative methods appears in a 1968 paper by Evans on Chebyshev acceleration of SSOR. The concept of preconditioning as a way of improving the convergence of iterative methods is much older, however. As far back as 1845, Jacobi’s method was known as an effective way to ensure the convergence of a simple iterative scheme, and preconditioning as a means of reducing the condition number in order to improve the convergence of an iterative process seems to have been first considered by Cesari in 1937. A major breakthrough took place around the mid-1970s with the introduction by Meijerink and van der Vorst of the incomplete Cholesky conjugate gradient (ICCG) algorithm. Recently, the BILUM preconditioner, multilevel preconditioners [11] and other preconditioning methods [12], [13], [14] were proposed, and the performance of iterative methods combined with preconditioning techniques has been analyzed in Refs. [15], [16], [17]. In general, a good preconditioner should meet at least two requirements: it should be cheap to construct and apply, and the preconditioned system should be easy to solve, so as to offset the cost of constructing the preconditioner.

In this article, we investigate the SSOR preconditioner for the Block–Broyden method. To reduce the time spent computing the inverse of the preconditioner, we use a diagonal matrix as an approximation of the inverse. We call this the Approximate-SSOR (ASSOR) method; it is based on ideas arising from the SSOR preconditioner combined with the Block–Broyden method. Our purpose is to compare and analyze these two methods.

The rest of the paper is organized as follows. Section 2 introduces relevant background on the Block–Broyden algorithm and preconditioning techniques. The SSOR preconditioner is introduced in Section 3, and the ASSOR preconditioning method is proposed in Section 4. Section 5 gives general remarks on preconditioning methods based on the Block–Broyden algorithm, shows the implementation details and analyzes each method from different aspects. Numerical results and their interpretation are included in Section 6. Section 7 contains concluding remarks.


Block–Broyden algorithm

In this paper we are concerned with the problem of solving the large system of nonlinear equations

F(x) = 0,  (1)

where F(x) = (f_1, ..., f_n)^T is a nonlinear operator from R^n to R^n. Suppose that it is possible to generate a new approximation x_k of x for k = 0, 1, ..., and that the components of x and F are divided into q blocks:

F = (F_1, ..., F_q)^T,  x = (x_1, ..., x_q)^T.

We define a block diagonal Broyden matrix of the following form:

B_k = diag(B_1^k, ..., B_q^k).

The Block–Broyden algorithm applied to (1) could be summarized as follows (see Algorithm
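The full algorithm statement is not reproduced in this excerpt, but the block-diagonal structure described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the identity initialization of each block matrix, the stopping test, and the per-block Broyden rank-one update are all common choices assumed here for concreteness.

```python
import numpy as np

def block_broyden(F, x0, blocks, tol=1e-10, max_iter=100):
    """Sketch of a Block-Broyden iteration: each block keeps its own
    Broyden matrix B_i, so the global Jacobian approximation is block
    diagonal, B_k = diag(B_1^k, ..., B_q^k).
    `blocks` is a list of index arrays partitioning the unknowns."""
    x = x0.copy()
    # initialize each block's matrix to the identity (a common choice)
    B = [np.eye(len(idx)) for idx in blocks]
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.zeros_like(x)
        for i, idx in enumerate(blocks):
            # solve B_i s_i = -F_i(x); for large problems this is where a
            # (preconditioned) iterative linear solver would be used
            s[idx] = np.linalg.solve(B[i], -Fx[idx])
        x_new = x + s
        F_new = F(x_new)
        for i, idx in enumerate(blocks):
            # Broyden rank-one update restricted to block i
            y_i, s_i = F_new[idx] - Fx[idx], s[idx]
            denom = s_i @ s_i
            if denom > 0:
                B[i] += np.outer(y_i - B[i] @ s_i, s_i) / denom
        x, Fx = x_new, F_new
    return x
```

Because each B_i is updated and inverted independently, the q block solves are embarrassingly parallel, which is the source of the parallelism noted in [7].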

SSOR preconditioners

The SSOR preconditioner can be derived from the coefficient matrix without any extra work. If the original symmetric matrix is decomposed as

A = D + L + L^T

into its diagonal, strictly lower triangular and strictly upper triangular parts, the SSOR matrix is defined as

M = (D + L) D^{-1} (D + L)^T.

Then we need to apply the inverse of the preconditioner and obtain the transformed linear system M^{-1}Ax = M^{-1}b, which has the same solution as (3).
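Since M is given in factored form, M^{-1}r is never formed explicitly; it is applied with one forward and one backward triangular solve. A dense NumPy sketch of this application step (illustrative only; a practical code would use sparse triangular solves):

```python
import numpy as np

def ssor_apply(A, r):
    """Apply the SSOR preconditioner M = (D + L) D^{-1} (D + L)^T to a
    residual r, i.e. return z = M^{-1} r, via two triangular solves.
    A is assumed symmetric; dense illustration only."""
    d = np.diag(A)            # diagonal of A
    L = np.tril(A, k=-1)      # strictly lower triangular part
    # forward substitution:  (D + L) w = r
    w = np.linalg.solve(np.diag(d) + L, r)
    # undo the D^{-1} factor in the middle of M
    w = d * w
    # back substitution:     (D + L)^T z = w
    z = np.linalg.solve((np.diag(d) + L).T, w)
    return z
```

Each application thus costs two triangular solves and one diagonal scaling, and no storage beyond A itself, which is why the text says the preconditioner is obtained "without any work".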

The SSOR matrix is given in factored form, so this preconditioner shares many properties of other

Approximate-SSOR preconditioning method

In this section, we describe an approximate inverse of SSOR preconditioner and propose the Approximate-SSOR preconditioning (ASSOR) method.
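The excerpt does not give the exact diagonal approximation the paper uses. One plausible reading, sketched below under that stated assumption, is to replace the factored solve with a stored diagonal (here, the reciprocal of diag(M)), so that applying the preconditioner becomes a single O(n) elementwise multiply:

```python
import numpy as np

def assor_diagonal(A):
    """Hedged sketch of the ASSOR idea: instead of solving with the SSOR
    matrix M = (D + L) D^{-1} (D + L)^T, keep only a diagonal
    approximation of M^{-1}. The paper's exact choice of diagonal is not
    given in this excerpt; here we take 1 / diag(M) as one plausible
    option. Returned as a vector of length n."""
    d = np.diag(A)
    L = np.tril(A, k=-1)
    M = (np.diag(d) + L) @ np.diag(1.0 / d) @ (np.diag(d) + L).T
    return 1.0 / np.diag(M)

# applying the preconditioner is then just z = n_vec * r,
# i.e. O(n) work and O(n) storage per application
```

Trading the two triangular solves for an elementwise product is what gives ASSOR its lower cost per iteration and lower storage, at the price of a weaker preconditioner (more iterations), consistent with the comparison reported later.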

Preconditioning methods for Block–Broyden algorithm

This section introduces the Block–Broyden algorithm combined with several preconditioning methods (BBP). As introduced in Section 2.1, the Block–Broyden (BB) algorithm uses iterative methods without preconditioning for solving the underlying linear system (2), whereas the BBP algorithm applies different preconditioners to solve the linear systems. Section 5.1 gives the description of the BBP algorithm, Section 5.2 shows the implementation details, and Section 5.3 analyzes the two BBP algorithms from different aspects.

Numerical experiments

In the numerical experiments, we compare the four performance indexes of the SSOR and ASSOR preconditioning methods on a Bratu problem from computational physics. The programs are written in C++ and use double-precision floating-point variables.

The nonlinear partial differential equation for this problem can be written as

-Δu + u_x + λe^u = f,  u|_∂Ω = 0,  (x, y) ∈ Ω = [0, 1] × [0, 1],

on the unit square Ω of R^2 with zero Dirichlet boundary conditions. It is known as the Bratu problem and has been used as a test problem by
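Discretizing this PDE on a uniform grid yields the nonlinear system F(u) = 0 to which the methods above are applied. A sketch of the residual assembly, assuming a standard 5-point Laplacian and a central difference for u_x (the paper's exact discretization is not given in this excerpt):

```python
import numpy as np

def bratu_residual(U, lam=1.0, f=0.0):
    """Finite-difference residual of the Bratu-type problem
    -Δu + u_x + λe^u = f on the unit square with zero Dirichlet
    boundary conditions. U holds the interior values on an m x m grid;
    the 5-point Laplacian and central difference for u_x are assumed
    choices, and the first array axis is taken as the x direction."""
    m = U.shape[0]
    h = 1.0 / (m + 1)
    P = np.pad(U, 1)  # pad with the zero boundary values
    lap = (4 * U - P[:-2, 1:-1] - P[2:, 1:-1]
                 - P[1:-1, :-2] - P[1:-1, 2:]) / h**2
    ux = (P[2:, 1:-1] - P[:-2, 1:-1]) / (2 * h)
    return lap + ux + lam * np.exp(U) - f
```

Each interior node couples only to its four neighbours, so partitioning the grid into q contiguous sub-domains gives a natural block structure for the Block–Broyden matrix B_k.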

Conclusions

We have proposed the SSOR and ASSOR preconditioners for the Block–Broyden method to solve nonlinear systems arising from scientific and engineering computing. ASSOR is based on ideas arising from the SSOR preconditioner. Theoretical analysis and numerical experiments show that although the ASSOR method needs more iterations than SSOR, it requires less CPU time, a lower construction cost and less storage. Therefore, the ASSOR algorithm has the better overall performance.




The work was supported by the Natural Science Foundation of Jiangsu Province under Grant Nos. BK2004218 and BK2003106, and also by the PanDeng Project of NUPT.
